Quantum Contingencies in Cryptography: A Short Primer

by Adam Corbo, July 19th, 2020
Are we at the cradle of a quantum age?

Quantum computers constitute a radical new technology for performing complex calculations and computations with an efficiency so potentially vast that its impacts are scarcely imaginable to the majority of today’s coders and hardware developers. Sure, you may have heard this before.

Indeed, the advent of most truly substantial (and thus usually disruptive) categories of tech - from the abacus to the telegraph to the Internet to blockchain - tends to be accompanied by choirs of utopian and apocalyptic predictions.

Moreover, the manic novelty-thirsting pageantry of today’s globalized markets vests developers and branders with a wearying prerogative to frame every new thing as the most groundbreaking invention since the home console, the smartphone, or the Internet itself. Given the saturation of claims, even once-faithful attendees of the church of silicon innovation may compulsively tune away from another prophecy of cyber-doomsdays and Edens wrought by tomorrow’s gadgets. 

So, we will begin our discussion by making the following clear: what quantum computing represents and portends is not just “faster hardware” and another wave of cool home tech, but likely the biggest paradigm shift in computational technology since its very emergence. To be fair, its impacts may still take a great deal of time to reach us, not unlike the Internet, which took decades to journey from Pentagon-funded labs to our living rooms, favorite coffee shops, and jean pockets. Nonetheless, it may be a useful and surprising exercise to recognize that much of what the Internet eventually became in the world could be very directly traced to the structures and capacities already in place during its sheltered “childhood”. 

More than that, many of the Web's specific applications were already being actively, if vaguely, envisioned early on. As such, long before Julian Assange first learned how to say "command prompt", and decades before dial-up, chat-rooms, and the movie Hackers, most of the foundations of and prerequisites for today's online world already existed, and some of their radical applications, utilities, and contingencies were quite foreseeable. In other words, barring the wild-card impact along the way of a few handfuls of visionary teams and minds, much of our online world's destiny could be discerned by those standing next to its cradle.

Yet, just as a special recipe book is no true guarantee of a successful restaurant, the path from there to now was still filled with contingencies and surprises. And it's likely that the history of quantum computing will hold no fewer of these. It should be fairly clear by now that we see immanent parallels between these two paradigmatic technologies. But that's not the only reason for our extended Internet-centric preamble. Beyond this nebulous analogical relation, the prospects of quantum computing bear a far more direct significance to the early Internet, one which ties into the crux of our discussion. In short, the advent of quantum computing poses a direct and substantial threat to modern cryptography, which also happens to be the very backbone of the Internet as we know it. That threat is the focus of this article.

Technical Primer on Quantum Computing

Pictured: Richard Feynman, a Caltech Physics professor of some renown who proposed the model of what we know today as a quantum computer.

Quantum computers present a radical new technology for performing complex calculations and computations that could shorten the length of time required to perform certain operations (such as factoring large numbers) from centuries down to seconds. Proposed by physicist Richard Feynman in 1981, a quantum computer exploits special physical properties - such as quantum entanglement, interference, and superposition - to perform numerous computational operations simultaneously. While a normal Turing machine (a "classical computer") can only perform one calculation at a time, a quantum Turing machine can, loosely speaking, perform many calculations at once. If this seems like a minute distinction to a layperson just technical enough to recall the quad-core CPU in their smartphone, it may be useful to note that, given the operative distinction above, the workings of even the most powerful cell phone (or, for that matter, supercomputer) in the world are more similar to those of a calculator than of a quantum computer.

The quantum computers of today remain difficult and costly to engineer and, for most computational tasks, would still lose out even to classical computers far beneath current processing thresholds. Yet the sheer potential opened up by the operative simultaneity of quantum computers extends incalculably further than the gradually broadening limits of classical computation.

With that being said, the persisting challenges of implementing quantum computing technology continue to defer these prospects of actualization - prospects which are not only starry and utopian, but which also include certain unnerving contingencies. Some of these stem not from the potential computational efficiency of quantum machines, but from the sheer novelty of their operative principles, which may provide ways to undercut some of the most widespread digital security frameworks in use today. Granted, the scarcity and limitation of quantum technologies at this time make the present existence of practical quantum adversaries, at the very least, unlikely.

So even though practical quantum adversaries [probably] don't exist today, many of the methods they would leverage are already known and can be simulated on classical computers or run at small scale on extant quantum computers. (If you'd like to run a program on an actual quantum computer right now, check out the IBM Quantum Experience.) Well-studied algorithms that could be implemented on quantum computers include Simon's Algorithm, Shor's Algorithm, and Grover's Algorithm. In this article, we'll describe how these algorithms could be used by a theoretical quantum adversary, and discuss compelling implications for blockchain mechanisms and cryptographic security.

It must also be mentioned that, unlike the early Internet, the first seeds of quantum development may already be germinating well outside of governmental, academic, and corporate labs. But if open-source democratization of developmental access may indeed help compress the timeline and expand the early scope of fruitfully implemented quantum technology, it may also enable or expedite the emergence of systemically destabilizing contingencies. Some of these contingencies are already foreseeable; namely, those associated with the impending emergence of quantum machines powerful enough to expediently implement algorithmic methods of undermining the most widespread cybersecurity frameworks used today.

Before we go into these methods and vulnerabilities, below is a primer on some terms used in the fields of quantum computing. If you are already familiar with these terms feel free to skip the rest of this section. 

Entanglement: A physical phenomenon that occurs when a pair or group of particles is generated, interacts or shares spatial proximity in a way such that the quantum state of each particle cannot be described independently of the state of the others. A phenomenon which a bewildered Einstein referred to as “spooky action at a distance”, quantum entanglement constitutes one of nature’s most mystifying magic tricks: instantaneous correspondence (rather than causal transfer) of qualitative states.

Qubit: The basic unit of information used in a quantum computer. A qubit is distinct from a bit in that it can exist in a superposition of 0 and 1 simultaneously. Multiple qubits can be entangled with each other at once; hence, an operation on one qubit can simultaneously affect the joint state of all the qubits entangled with it. With a multitude of qubits, this can translate into astronomical levels of computational power.

Quantum Volume: Quantum volume (QV) quantifies the largest random circuit of equal width and depth that a computer can successfully implement. It is formally defined as a single-number metric that can be measured using a concrete protocol on near-term quantum computers of modest size. Systems with high-fidelity operations, high connectivity, large calibrated gate sets, and circuit-rewriting toolchains are expected to have higher quantum volumes. This metric was developed by IBM, and we will use it throughout this article - as distinct from a raw count of accessible qubits - to state the computational power of quantum hardware.

A brief introduction to cryptography:

Pictured: An example of the Caesar cipher with an alphabetical left shift of 3.

Cryptography has been present in some form or another in human societies for thousands of years. One of the earliest known forms is the so-called "Caesar Cipher": an "analog" encryption method devised over 2,000 years ago by the Roman general and dictator Julius Caesar to secure the confidentiality of his correspondence. The Caesar Cipher works as follows: each letter within a given message is replaced with a character that is some fixed (and predetermined) number of positions away from it in the alphabet (in this instance, the Latin alphabet).

To decrypt such a message, Caesar's epistolary addressee would need to apply to each character therein a numerical key designating the precise degree of its displacement from the encrypted character. The method allows several degrees of "encryption strength": a heavily encrypted message would designate a separate "shift value" for each character within it, while a lightly encrypted message would use a single master key for every character. In the latter case, it would be fairly easy to decrypt the message given mere awareness of the alphabet and some trial and error.

For example, if one were to use the Caesar Cipher with a 3-character right-shift "master key" on the message "Hello world!", the encrypted message would be "Khoor zruog!". Yet, due to the fairly small and finite number of letters in the English (or Latin) alphabet, a (Romanesque) mail-hacker would simply need to go through each of the few dozen possible shift values until one of them fits the message, and thus unlock it. However, if every character of a message were to have its own shift value, the process would become exponentially more complex and, depending on the length of the message and the potential use of multiple numerical keys for reused characters, could become fairly resilient to decryption, even by experienced mail-hackers.
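For the concretely minded, here is a minimal Python sketch of the single-master-key variant described above (the shift of 3 and the messages are just the examples already used; a per-character scheme would take a list of shifts instead):

```python
# Caesar cipher with a single "master key" shift applied to every letter.
def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

print(caesar("Hello world!", 3))   # -> Khoor zruog!
print(caesar("Khoor zruog!", -3))  # -> Hello world!
```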

Variations on the above form of encryption (and even exact reenactments thereof) would play a significant role throughout history, even up until the present day. In April 2006, the fugitive Mafia boss Bernardo Provenzano was captured by Sicilian authorities in large part as a result of encoding his messages to subordinates via a clumsy variation on the Caesar Cipher - a rudimentary word-scramble easily cracked by a team of amused cops. Perhaps imagining himself as clever as Julius Caesar, the don gave the world neither a Rubicon nor a Rubik's cube with his hapless encoding attempt, but merely proved that, for better or worse, not every downfall requires a Brutus. Of course, given the sophisticated cryptographic technologies of today, the Caesar Cipher has largely been consigned to the world of historical curios. However, it's not impossible that someone might find another use for it (cipher poetry, perchance?). Readers particularly drawn to history and/or puzzles can play around with the Caesar Cipher here.

But let's now give the archaisms some rest and return to the 21st century: a world of online networks. Indeed, the Internet could be viewed as a structural frame situating much of what characterizes our era. Both the base and the superstructure, the Internet serves as a global home of financial economies and cultural meanings alike. And where we frame the Internet as such a foundational structure, we could conceive of cryptography as its nails, screws, and rivets: the small, internal, or otherwise hidden components holding the whole structure together. If one were to suddenly remove these vital, if also spectral, components, the whole structure would become malleable and shaky. Presently, nearly every bit, byte, and blog of internet traffic relies on complex cryptographic dynamics, while behind the digital scenes, protocols first devised and implemented back when John Lennon was still alive continue to perform various forms of encryption and decryption outside an average user's scope of perception, or even awareness. Emails, credit card data, and digital identities all depend on cryptographic standards of security.

To give some of our readers a brief primer to the Internet’s shadowy internal world of cryptography, as well as to make further elements of our discussion clearer, we’ve rendered a sequel to our quantum glossary above. Below, we explicate a few terms key to the field indeed known for holding (and designing) most of our keys: cryptography. But if you are already familiar with the field, feel free to skip the rest of this section.

A TINY CIPHER GLOSSARY:

Plaintext: A message that can be encrypted, or the result of decryption. "Hello world!" is plaintext.

Ciphertext: The gibberish that comes out of encrypting plaintext. If you encrypt the plaintext "Hello world!", the ciphertext could be "Khoor zruog!"; decrypting that ciphertext would return the plaintext "Hello world!".

Block Cipher: A deterministic algorithm by which plaintext is converted into ciphertext. If I have the plaintext "Hello world!" and apply my cipher (here the Caesar cipher stands in for simplicity; strictly speaking, a block cipher operates on fixed-size blocks of bits), the result will be the ciphertext "Khoor zruog!".

Hash Function: A hash function takes in an input of symbols, uses a complex method to create a scrambled code from them, and spits this out as a unique result. The result that gets spit out is called the hash digest. Hash digests have a fixed length, independent of what was initially put into the hash function. Popular hash functions include SHA-256 and SHA-3. If I put the word "dog" through the hash function SHA-256, the hash digest of "dog" is: "cd6357efdd966de8c0cb2f876cc89ec74ce35f0968e11743987084bd42fb8944"
You can play around with the SHA-256 hash function here.
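The "dog" example above can also be reproduced in a couple of lines with Python's standard library:

```python
import hashlib

# Hash digests have a fixed length regardless of input size.
print(hashlib.sha256(b"dog").hexdigest())
# cd6357efdd966de8c0cb2f876cc89ec74ce35f0968e11743987084bd42fb8944
```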

Pre-Image: The pre-image of a hash digest is the input that produced it - what came before the hash function was applied. In the example above, the pre-image of
"cd6357efdd966de8c0cb2f876cc89ec74ce35f0968e11743987084bd42fb8944"
under SHA-256 would be "dog". Hash functions are usually designed such that finding the pre-image of a hash digest is difficult.

Protocol: The set of rules governing the transfer of data between two or more devices. If a digital domain utilizing a particular kind of cryptography is like a country, then the protocol is its legislative charter, or a bill of laws and protections. Naturally, every “machine of loving grace” has its own protocol. So, what does a computer say to a cop? “Hey man, I have my protocols!”

Asymmetric Cryptography: A form of cryptography that relies upon a public and private key pair. The public key is formed from the private key and is accessible to anyone, but is designed such that it should be hard to derive the private key used to generate it. The private key is designed to only be known to the owner of the private key and not widely distributed and known like the public key. In such a system, any person can encrypt a message using the receiver's public key, but that encrypted message can only be decrypted with the receiver's private key. 

Shor’s Algorithm: In Quantum Swan’s Shadow

Pictured: An example of modular arithmetic multiplication tables. Finding the period with which a modular function repeats itself is essentially how Shor's algorithm breaks RSA cryptography.

Suppose I give you a calculator and ask which two prime integers can be multiplied together to produce the number 176399. Could you tell me in under 30 seconds? Presuming you're not a mathematical savant with photographic memory, probably not. In this particular instance, the problem happens to be as seemingly straightforward as identifying a pair of prime (integer) suspects behind a product, yet it may require a program to sift through countless permutations, lining up pair after pair of prime integers, until one of the permutations finally matches the inputted value. (There are lazier methods - and by lazy I mean more efficient, requiring fewer steps - but the point stands.) In this way, even an easily conceptualized and definitively solvable mathematical problem can involve a mass of operations that even a fast computer would take a while to complete. In our example, we used a six-digit number. Far larger figures, however, may occupy our fastest and most advanced supercomputers for years, decades, centuries, or even the lifetime of the universe.

But what if, rather than an imposing six-digit figure, you were instead given two three-digit integers (e.g. 419 and 421) and asked to multiply them using the calculator? Savant or not, it would take you only seconds to arrive at the result: 176399. 

This asymmetry - the difficulty of finding which two prime numbers make up a larger number, in contrast to simply multiplying a pair of primes together - forms the basis of the majority of modern cryptography in use today. The method underlies most encryption and decryption in nearly all aspects of internet privacy, and cryptocurrency mechanisms rely upon it as a fundamental aspect of their infrastructure, both in private and public key pairs and in creating a reliably secure system. In effect, this singular math problem serves as the foundation for the entire system of Internet-age security. What if this foundation were pulled away? We'll get to that later. First, let's talk a bit more about how the system functions.

Where K is the public key accessible to everyone, G is a generator that generates a public key from a private key, and k is the private key of the user, public keys are generated by K = G*k. The private key is supposed to be secret and known only by its owner. If the numbers involved are large enough, recovering k from K - the so-called discrete logarithm problem, a close cousin of the factoring problem above - could take a very long time even on a supercomputer.
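As a toy illustration, here is a Python sketch of the same asymmetry. The numbers are illustrative assumptions only (real systems use values hundreds of digits long), and the multiplicative K = G*k shorthand above corresponds to the modular exponentiation below:

```python
# Toy discrete-log setup: tiny, purely illustrative parameters.
p, g = 23, 5        # small prime modulus and generator
k = 7               # private key, known only to its owner
K = pow(g, k, p)    # public key, published to everyone

# Brute-force recovery of k from K: the search a classical adversary
# is reduced to, which becomes hopeless at realistic key sizes.
for guess in range(1, p):
    if pow(g, guess, p) == K:
        print("recovered private key:", guess)  # -> 7
        break
```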

Thus, traditionally, forms of encoding based on the instrumental asymmetry inherent to the K = G*k equation are thought of as secure systems for large enough key sizes. As such, cybersecurity systems across manifold digital domains rely on the assumption that even advanced computational hardware would require grotesquely lengthy timespans to invert such functions - for instance, to produce the prime factors of large figures. The factoring version of this problem most directly describes the foundational logic of the RSA - or "Rivest-Shamir-Adleman" - encryption protocol (published in 1977), and could be said to be illustrative of modern digital cryptography in a remarkably widespread way.

The RSA algorithm alone serves as one of the most broadly utilized public-key cryptosystems, used in everything from secure web traffic to Gmail (Bitcoin, for its part, relies on elliptic-curve cryptography, which is equally vulnerable to the quantum attacks discussed below). But even if we were to consider commonplace cryptography beyond the RSA protocol, we would see that much of it likewise relies on functions that are operatively simple to compute but asymmetrically challenging to invert, or reverse-engineer, in order to produce unknown constituent integers. For instance, very similar logic applies to another widespread protocol: ElGamal. Developed in 1984, ElGamal is based on the K = G*k problem detailed above, the most common articulation of which formally designates it as the Discrete Logarithm Problem (or DLP).

The way a classical computer approaches solving the DLP (or factoring) for a large key is not too dissimilar from the way one might go about decrypting a complex version of the Caesar Cipher. If you recall, breaking RSA requires discerning the prime factors of a number, while solving the DLP requires recovering the secret exponent k. And the classical computer's way of trying to do either is essentially a startlingly inelegant form of trial and error, also known as the brute-force method.

To enact the brute-force method, the machine merely guesses a number, then checks whether or not it happens to be the sought-after factor. And if not, it tries another one… And another one… And on, and on, and on… To speak of timescales alone: factoring a 240-digit number would occupy an average computer of today for over 2,000 years. Granted, there also exist various methods of pooling the processing power of numerous computers toward collaborative decryption. This can indeed produce far more expedient (and thus effective) results, depending on the scale of the hardware infrastructure thus mobilized.

For instance, the same 240-digit number requiring an average computer 2,000+ years to factor could be taken apart by 2,000 such computers, working in parallel and highly optimized, in only about a year. Of course, this might raise the question of who could even mobilize such a computational army, and for what purposes. If such mobilization becomes more plausible in our era of cloud computing, fast bandwidth, and various forms of crowd-sourcing, it nonetheless remains difficult to imagine anyone but a state apparatus (or perhaps a piece of clever malware) harnessing such a degree of computing power. But before one muses about rogue state-operated "server farms", one ought to recall that given the present sophistication of encryption algorithms and the above-discussed limits of classical computing, not even vast-scale computational infrastructure could guarantee reliable decryption efforts.

After all, when dealing with a particularly large number, the computer might continue testing candidates practically ad infinitum. Now, imagine that we're dealing with a K value quite a bit larger than a mere 240 digits - closer to, say, 1,000 digits in length. Such a number would (in computer language) translate to a 3,000-bit encryption key. As the popular thought experiment often used to illustrate the DLP's resilient domination of classical computing goes, it is widely estimated that decrypting a key of such a length would require nothing less than a classical computer constructed from every atom in the universe. Talk about the scale of the CPU alone! And with that thought, we send the idea of "server farms" out to pasture… At least for now.

For the above reasons, and for many decades, the DLP has been widely assumed to be a nearly impenetrable basis for digital security. These days, however, this assumption is rapidly becoming problematized, if not altogether overturned. And it's not farm- or cloud-pooling of CPUs that threatens it, but rather the ever-more-imminent possibility of implementing Shor's Algorithm on a sufficiently advanced quantum computer. Shor's Algorithm is a method of finding the prime factors of large integers (and of solving discrete logarithms) that can be performed with exponentially fewer steps by means of a quantum computer than would be necessary using the most expedient methods available on a classical computer.

Shor's Algorithm was developed in 1994 by Peter Shor at Bell Labs. In 2001, a group at IBM successfully implemented Shor's Algorithm using 7 qubits and was able to factor the number 15 into 3 and 5. That's a far cry from breaking modern cryptography, but the fact that it could be accomplished as a proof of concept demonstrates that, with enough error-corrected qubits and enough advances in hardware, it could one day be implemented to solve for K values of nearly any length.

With enough coherent qubits and sufficient quantum volume, a quantum computer running Shor's Algorithm could find, in a little over a minute, the prime factors of a large integer that would take a classical computer centuries. (Finding the prime factors of a 300-digit number would take about 10^30 operations on a classical computer, while Shor's Algorithm could perform the same computation in roughly 100,000 operations.)
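For the curious, here is a classical Python sketch of the arithmetic that Shor's Algorithm wraps in quantum machinery, reproducing the IBM demonstration of factoring 15. The brute-force period search in the middle is precisely the step a quantum computer performs exponentially faster; the rest is ordinary number theory:

```python
from math import gcd

def shor_sketch(N, a):
    """Factor N using the period of a^x mod N (period found classically here)."""
    if gcd(a, N) != 1:
        return gcd(a, N), N // gcd(a, N)  # lucky guess: a shares a factor with N
    r = 1
    while pow(a, r, N) != 1:              # find the period r of a^x mod N;
        r += 1                            # the quantum speedup lives here
    if r % 2 == 1:
        return None                       # odd period: retry with another a
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None                       # trivial square root: retry
    return gcd(x - 1, N), gcd(x + 1, N)

print(shor_sketch(15, 7))  # -> (3, 5)
```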

Going back to the above example, let's once more suppose that a public key K is equal to G*k. All that would be required to produce the private key k - other than an advanced quantum computer, of course - would be running a few iterations of Shor's Algorithm to solve for k from K. Let's consider some possible implications of a so-called "rogue" party suddenly acquiring this ability. For instance, if the key is used to secure a Bitcoin address, then all a malicious decrypter would need to do in order to gain access to a person's tokens is derive, from the public key, the private key used to generate it.

It's fairly clear that if a quantum computer able to efficiently run Shor's Algorithm came online tomorrow, then the very backbone of privacy and security for the Internet, finance, and many other infrastructures relied on by much of the modern world could be severely compromised or even destroyed. Such an eventuality is known as a "black swan event". Black swan events are characterized by their extreme rarity, their severe impact, and the widespread insistence, in hindsight, that they were obvious enough to have been preventable.

With that being said, although an effective malicious engagement of Shor's Algorithm could indeed undermine present structures of cryptography - thus constituting one of the most potentially severe contingencies of the quantum computing era - it also remains one of the hardest algorithms to implement effectively on a quantum machine. Thousands of error-corrected qubits, quite possibly built from millions of noisy physical ones, would likely be required to run Shor's Algorithm at the scale requisite for such dramatic potentialities.

As such, the massive technical requirements of this algorithm's deployment, and the long developmental time-frames likely stemming from that challenge, serve as ameliorating factors with respect to its contingencies. By most estimates, any plausible deployment of Shor's Algorithm that might pose a threat to modern cryptography is at least a decade or more away. Still, it is worth asking whether a decade is long enough to change some of the Internet's basic architectures - or even to make it clear that such changes are not only warranted but pressing. In any case, we ought to begin considering alternatives to current privacy mechanisms now, rather than after the fact.

Grover’s Algorithm: Cracking the Code’s Code

Pictured: Quadratic growth folding. Each tile represents a single iteration of Grover's algorithm on a quantum computer, while the attached number represents the total number of iterations required on a classical computer to find a marked value in an unordered set of data. For large numbers, the difference is dramatic.

Suppose I have a password I use for a website. Let's say I don't want this password to be known by the website administrator, but obviously the administrator needs to know I submitted the correct password. A simple way to achieve this would be to first hash the password with an agreed-upon hash function, then send the resultant hash digest over to the website to see if it matches what they have on record. If my password is "tomato" and I hash it with the SHA-256 hash function, the resultant hash digest would be: "5ed728c2fa5d767bc6c1ec6a732db1e37c343be46913e6498d340f7782691f14". I then send this hash digest over to the website; they see it matches the digest for my password they have on record, and thus it verifies my identity while still preserving the secrecy of my password. Everybody is happy.

Now let's say the website administrator has malicious intent and wants to find out what your password is from the hash digest you've sent them. One way to do this would be to find the "pre-image" of the hash digest. SHA-256 is specifically designed to be hard to reverse. That being said, there is still a reliable brute-force method to undo the hash function and find the pre-image: simply feed candidate strings of symbols into the hash function and compare the outputs with the known hash digest.

Put another way, this is an unstructured search problem. To find someone's password, one would go through all the letters of the alphabet (and symbols, if they were smart about choosing their password) and all their combinations, until eventually the combination "tomato" matches the known hash digest.

Granted, doing so might require hundreds of thousands, if not billions, of instances of checking pre-images against the corresponding hash digest. This calls for a guess-and-check algorithm of a sort that could plausibly be deployed on a classical computer. The processing limits of classical hardware would, however, be a serious handicap here: on average, the computer must parse through half of all the possibilities before finding a match.
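Here is a minimal sketch of that guess-and-check search in Python, assuming for illustration a lowercase-only password (pure Python is slow, so this is illustrative rather than practical):

```python
import hashlib
from itertools import product
from string import ascii_lowercase

target = hashlib.sha256(b"tomato").hexdigest()  # the digest the attacker saw

def find_preimage(target, max_len=6):
    # Try every lowercase string up to max_len characters, shortest first.
    for length in range(1, max_len + 1):
        for combo in product(ascii_lowercase, repeat=length):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target:
                return guess
    return None

print(find_preimage(target))  # -> tomato, after a few hundred million guesses
```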

Grover's algorithm is uniquely positioned to do just this sort of task quickly and efficiently. It is simply a highly efficient search algorithm for finding a "needle" value in a "haystack" of unsorted data. Grover's algorithm is sometimes overlooked as a threat to cybersecurity because its speedup is quadratic (polynomial) rather than exponential like Shor's: instead of going through half the possibilities before finding a match, one need only go through roughly the square root of all the possibilities. Even so, Grover's algorithm is faster than any known classical algorithm that could effectively replicate its functionality.

Perhaps what should be considered the real threat factor in Grover's algorithm is that it requires far fewer qubits and far less quantum volume to perform its function effectively than Shor's algorithm does. This means it presents a possible unforeseen threat vector on a much shorter development timescale. Even at quadratic speedup, Grover's algorithm provides a sizable advantage that should not be overlooked when applied to the large search spaces upon which the security of hash functions relies.

Finding the pre-image of a hash digest that correlates with, say, a password is difficult. There are literally trillions upon trillions of possible permutations that could be run to guess the correct arrangement of symbols, and the sheer computational power required to guess the pre-image of a hash digest is usually so large that the scheme is considered secure; a hacker can do nothing useful with the digest alone. However, if someone could narrow down the range of possible permutations, it could become possible to use a quantum computer with far less quantum volume - far fewer qubits - to find the pre-image of the hash through Grover's algorithm, and thus recover the password.

Even without a quantum computer, finding the pre-image of the SHA-512 hash of a password about 10 characters long, drawn from all 80 available symbols, by simply going through all possible permutations would take nearly 3 years on one of the most powerful supercomputers currently available. Using Grover's algorithm on a functional quantum computer could shorten this to months, weeks, or even seconds, depending on the password's complexity and how fast the quantum computer runs.
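The scale of that speedup is easy to reproduce, taking the article's figures of 10-character passwords over 80 symbols as given:

```python
from math import isqrt

search_space = 80 ** 10             # ~1.07e19 candidate passwords
classical_avg = search_space // 2   # expected classical guesses: half the space
grover_calls = isqrt(search_space)  # Grover needs ~sqrt(N) iterations: 80**5

print(f"classical: {classical_avg:.2e} guesses")     # -> 5.37e+18
print(f"Grover:    {grover_calls:.2e} oracle calls")  # -> 3.28e+09
```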

If you extend this example past passwords to the fundamental mechanisms that many cryptocurrency decentralized ledgers rely upon, finding the pre-images of hash functions can expose multitudes of confidential or otherwise secret information, effectively nullifying security features presented as essential to the infrastructure of these systems. This could include stealth addresses, public keys, private keys, ring signatures, and even the proof-of-work mechanism (by quickly finding the winning nonce value in like fashion).

There are other applications in which Grover's algorithm could be used maliciously, adding a security flaw to what would otherwise be "classically secure" systems. Though it doesn't offer an exponential speedup like other quantum algorithms such as Shor's, on a practical level it is, theoretically, much easier to implement effectively.

If Grover's algorithm does introduce security flaws into otherwise secure systems, it deserves more immediate attention than the blunter security flaws that may eventually be introduced through Shor's algorithm.

Simon’s Algorithm and Further Capabilities:

Pictured: A representation of mathematical chaos. Randomness generated by a deterministic set of rules is "pseudorandom".

There are other algorithms, beyond those of Shor and Grover, that enjoy efficiency speedups when run on a quantum computer. Shor's and Grover's algorithms simply present the most widely documented examples of ways in which a quantum computer could crack existing cryptography. But there are further algorithms and capabilities, either known or waiting to be discovered, that a quantum computer could exploit - possibly in conjunction with a classical computer - to create an advantage in breaking existing or even future cryptography.

The field of quantum computing is a rapidly expanding, protean domain. The volume of research papers released on the subject within the past decade dwarfs that of any earlier period, and new capabilities are being discovered all the time. It is still a relatively young subject, much of its territory remains uncharted, and the limits of what quantum computers could achieve are still being explored.

As a given, quantum computers will, by their very nature, always be able to produce random numbers in a way no classical algorithm can. Any classical algorithm used to create randomness is, by definition, reliant upon a deterministic set of rules. This property and its underlying principles are at the very heart of how quantum mechanics distinguishes itself from classical mechanics: quantum computers, unlike classical machines, operate in the realm of genuine randomness rather than pseudo-randomness. Modern cryptography and cryptocurrency rely upon the generation of random numbers and seeds using pseudorandom number generators in order to ensure security and privacy. True randomness, however, is inherently quantum.
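That determinism is easy to demonstrate: seed a classical pseudorandom generator twice with the same value, and it replays the identical "random" sequence.

```python
import random

for _ in range(2):
    rng = random.Random(42)  # fixed seed: the generator is fully deterministic
    print([rng.randint(0, 9) for _ in range(5)])
# Both lines print the same sequence; quantum randomness has no such seed.
```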

Simon's Algorithm and the Feistel Cipher

Pictured: A diagram of the Feistel cipher being applied to a piece of plaintext.

In view of such foundational contrasts, it appears even more poignant that several recently developed quantum algorithms, such as the Bernstein-Vazirani Algorithm, have already been proven able to crack existing block ciphers grounded in the logic of classical computation. Many of these applications follow a method similar to how Simon's Algorithm can be used to crack the Feistel cipher. Namely, Simon's Algorithm finds the period with which the output of a function repeats itself over a set of given inputs. Its output can be used to set up a system of solvable linear equations which, in conjunction with a classical computer, reveals the hidden XOR mask of a function - and therefore cracks extant cryptographic standards.

The Feistel cipher underlies the construction of many types of block ciphers used in cryptography. It consists of splitting the plaintext (represented in bits) into two halves, then using a function to alter one half of the plaintext and XOR-ing the result into the other. Afterwards, the halves switch places and the operation is performed again. Sometimes, if you paint a house, one coat of paint isn't enough to cover the color underneath; it might be necessary to apply a second coat, or two, or three, until the initial color is no longer visible. Likewise, this process of applying a function to half the text, switching halves, and applying the function again is repeated for several rounds until the desired ciphertext is produced.

As the function is applied to one half of the bits and XOR-ed into the other, round after round, the output eventually becomes periodic. If you can find how many iterations, or "coats of paint", pass before the bits repeat themselves, then it becomes possible to decode the ciphertext by correctly guessing the key used in each function iteration to encode it. It's possible to attempt this classically as well, but classically it is considered computationally intractable, whereas Simon's algorithm run on a quantum computer operates in O(n) time - exponentially faster.
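To ground the analogy, here is a toy Feistel network in Python. The round function F and the 16-bit block size are made-up stand-ins chosen for readability; the point is the structure of split, mix, XOR, and swap, repeated for several "coats of paint":

```python
def F(half, key):
    # A stand-in round function: any keyed mixing of an 8-bit half will do.
    return ((half * 0x9E + key) ^ (half >> 3)) & 0xFF

def feistel_encrypt(block, keys):
    left, right = block >> 8, block & 0xFF    # split the 16-bit block in half
    for k in keys:                            # one round per "coat of paint"
        left, right = right, left ^ F(right, k)
    return (left << 8) | right

def feistel_decrypt(block, keys):
    left, right = block >> 8, block & 0xFF
    for k in reversed(keys):                  # same rounds, applied in reverse
        left, right = right ^ F(left, k), left
    return (left << 8) | right

keys = [0x2B, 0x7E, 0x15]                     # three toy round keys
ct = feistel_encrypt(0x1234, keys)
assert feistel_decrypt(ct, keys) == 0x1234    # the rounds undo cleanly
```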

The DES (Data Encryption Standard) used the Feistel cipher, and many other block ciphers employ a similar architecture based on it. Similarly, hash functions are composed of block ciphers that can be exposed in this way. It has been proven that a 3-round Feistel scheme is a secure pseudorandom permutation as long as its internal functions are pseudorandom as well. A quantum distinguisher that can tell a 3-round Feistel scheme apart from a random permutation could therefore be used to help reverse many types of block ciphers currently in use and break many forms of modern cryptography.

Further Concerns: 

Pictured: Diagrams of different wavefunctions representing a particle in an energy well. The inherent uncertainty and hence "randomness" is at the heart of quantum mechanics.

There are other ways in which a working quantum computer could introduce vulnerabilities into an otherwise secure system. Exploiting the variance between the so-called "true randomness" of a quantum computer and the pseudorandomness of a classical computer could expose hidden information in the functions and values used to construct block ciphers - a potentially powerful tool in differential cryptanalysis for decoding private information. Surprisingly, it might already be possible to exploit this advantage with near-current hardware.

It should be noted that the strength of quantum computers comes primarily from their capacity to perform many computations simultaneously by exploiting the properties of quantum entanglement (as described in our glossary). Granted, a quantum adversary would still require a machine with a substantial number of qubits in order to gain any sizable advantage over a classical computer attempting similar methods. Yet as the average number of available qubits grows, the scale, efficiency, and sheer number of co-enacted computations has the potential to increase exponentially.

Beyond what we've detailed above, there are many other capabilities and algorithms which could be implemented with far more expediency on a quantum computer. Some of these break existing cryptography; some don't. It is theorized that a working quantum computer capable of breaking existing cryptography is decades away from being realized - a timeline based on estimates of how many qubits it takes to implement Shor's Algorithm effectively. However, this may prove to be a poor metric. Algorithms requiring far fewer qubits or less quantum volume could nonetheless be used to break specific existing cryptography schemes. The deployment of these algorithms could be a year, three years, or over a decade away… But it is also possible that the development timescale for effectively breaking existing cryptographic standards could be far shorter than many of today's commentators predict.

Implications and Conclusions:

Pictured: A cat.

The implications of the advent of quantum computers for cryptography and blockchain technology range from cataclysmic to inconvenient. A sufficiently powerful quantum computer would break nearly every practical application of cryptography in use today, rendering e-commerce and many other digital applications we interact with and depend on in our daily lives insecure - including previously secure messaging systems such as Signal, Telegram, etc. Stealing the private keys to someone's bitcoin address could be easily achieved by running Shor's algorithm on a large-scale quantum computer, making current security mechanisms unreliable.

Such a contingency portends only the following possibilities: either a massive devaluation of Bitcoin tokens, or a redevelopment - or even replacement - of the essential cryptography on which the whole Bitcoin system is based. The same logic, which real-world conditions render binary rather than quantum, would also apply to any other current or future cryptocurrency that fails to update or redevelop its cryptography to make it more resistant to quantum adversaries.

Quantum computers in their initial stages of quantum supremacy, making use of algorithms such as Grover's - which requires far fewer qubits to implement than Shor's - might be able to reverse hash functions quickly as one of the first demonstrations of their added capabilities as a threat factor. A simple band-aid solution is to double the symmetric key lengths. This is inconvenient, but easily executable compared to the upheaval of replacing the cryptography itself to resist attacks utilizing Shor's.

Quantum computers will change our world, much as the personal computer and smartphone revolutions did not long ago. The main takeaway we would like you to draw from this article is an intuition for the types of threats on the horizon, and an understanding that an era dominated by quantum computers may be close to inevitable given the current pace of technological development - the only substantial variable being the development timeline before their commonplace utilization. But regardless of whether we are looking at a few years or over a decade, post-quantum cryptography needs to be implemented as an industry standard as soon as possible - preferably before, not after, a working quantum computer has cracked existing cryptography and demolished many of the infrastructures we rely on. Hindsight is 20/20, and the quantum adversary in the domain of cryptography is no nebulous phantom, but a very immanent shadow of vast, obscure powers slowly spreading their wings over our digital horizons.

This article was inspired by the authors’ research during a quantum-minded cryptography audit funded by the Monero CCS. To stay tuned for updates, follow us on:

Adam (Twitter, Medium, LinkedIn, GitHub, Instagram)

Mitchell (Twitter, Medium, LinkedIn, GitHub)

By Adam Ryan Corbo, B.A., Mitchell Krawiec-Thayer, Ph.D.

Additional edits by Cheyenne Nelson, B.A., Aleksey Tsukanov, Ph.D.