Nathan: What is RSA, how does it work, what are its vulnerabilities?

So what exactly is RSA encryption? RSA, named after its creators Rivest, Shamir, and Adleman, is "a public-key cryptosystem that allows us to encrypt messages between two communicating parties so that an eavesdropper who overhears the encrypted messages will not be able to decode them" (Cormen). A public-key cryptosystem also allows a party to send a digital signature along with its message. The signature loses its validity if any bit of the message is changed, which authenticates both the message and the sender. This makes RSA encryption a natural tool for authenticating electronic communications.

Each party in a public-key cryptosystem has both a public key and a secret key.
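To make the key-pair idea concrete, here is a minimal textbook-style sketch of RSA in Python. It is not taken from the draft above: the tiny primes and variable names are purely illustrative, and real deployments use 1024- to 2048-bit moduli, padding schemes, and vetted libraries.

 # Toy RSA sketch (illustrative only; far too small to be secure).
 p, q = 61, 53                 # two small primes (classic textbook example)
 n = p * q                     # public modulus, 3233
 phi = (p - 1) * (q - 1)       # Euler's totient, 3120
 e = 17                        # public exponent, coprime to phi
 d = pow(e, -1, phi)           # secret exponent, 2753 (modular inverse, Python 3.8+)
 
 message = 65
 ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
 recovered = pow(ciphertext, d, n)  # decrypt with the secret key (d, n)
 assert recovered == message
 
 signature = pow(message, d, n)          # "sign" with the secret key (toy scheme, no hashing or padding)
 assert pow(signature, e, n) == message  # anyone can verify with the public key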


Keshav:

Within the next few decades, we are unlikely to see quantum computing become viable for anyone besides well-funded governments and organizations. Its benefits and risks are unlikely to significantly affect the general public, but risks remain for people in power as well as for systems like cryptocurrency, which is still a multi-trillion-dollar industry.

Currently, the record qubit count sits at 433, set by IBM's Osprey processor, which costs about $15 million. Applying Moore's law to cost (halving every two years), after 28 years we get 1.5 * 10^7 / 2^(28 / 2) ~ $915. This suggests we should plan for quantum computing to become highly integrated within the next 30 years, with major advancements likely in the next 15-20.
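A quick Python sketch of that back-of-envelope projection, assuming the ~$15 million Osprey figure above and a Moore's-law-style halving of cost every two years:

 # Back-of-envelope cost projection (assumptions: $15M today, cost halves every 2 years).
 cost_today = 1.5e7                       # approximate cost of IBM Osprey, in dollars
 years = 28
 projected = cost_today / 2 ** (years / 2)
 print(projected)                         # ~915 dollars after 28 years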

Taking a look at the speedup that quantum computing would bring to RSA cracking, we turn to the runtime expression for the General Number Field Sieve (GNFS), the best-known classical algorithm for factoring extremely large numbers, which is sub-exponential but super-polynomial.


GNFS IMAGE HERE***

Currently, RSA encryption uses 1024- to 2048-bit moduli. Taking the geometric mean as an average case, we get sqrt(1024 * 2048) ~ a 1448-bit number.

Evaluating the GNFS expression for a 1448-bit modulus, we get:

~ 1.2033 * 10^24

Comparing this to Shor's algorithm, with a runtime of

O(log^3(n)), and approximating with log base 2 (so 1448^3 operations), we get:

~ 3.04 * 10^9

Comparing these two values, we get an improvement of

~ 3.96 * 10^14 times

Modern quantum gates take on the order of 200 nanoseconds to “switch”, while modern silicon transistors take about 100 picoseconds, a factor of roughly 2,000. Taking this difference into consideration, we get a real speedup of ~ 1.98 * 10^11 times.
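The arithmetic above can be checked with a short Python sketch; the GNFS operation count is taken as given from the earlier estimate, and the 200 ns / 100 ps switching times are the rough figures quoted above.

 # Sketch of the speedup comparison, taking the GNFS estimate above as given.
 bits = (1024 * 2048) ** 0.5           # ~1448-bit "average" modulus
 gnfs_ops = 1.2033e24                  # classical operation estimate from above
 shor_ops = round(bits) ** 3           # O(log^3 n) with log base 2: 1448^3 ~ 3.04e9
 raw_speedup = gnfs_ops / shor_ops     # ~3.96e14
 gate_penalty = 200e-9 / 100e-12       # quantum gate time / transistor switch time = 2000
 real_speedup = raw_speedup / gate_penalty
 print(f"{raw_speedup:.2e}, {real_speedup:.2e}")   # ~3.96e14 and ~1.98e11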

This still takes RSA encryption from being unbreakable within the age of the universe to being breakable only over many thousands of years. However, with faster quantum gates as well as improved algorithms, it is likely that blockchain systems will face quantum attacks in the near future.

Therefore, much work has been put into creating quantum-resistant cryptographic algorithms. Currently, lattice-based encryption seems to be the most fruitful.

Lattice encryption is based on the fact that the simple linear-algebra equation

Ax = b

is, in the right setting, a one-way, attack-resistant function, and

Ax + e = b,

where e is a small error vector, is even more resistant.

Looking at an example problem, we define a lattice L as all integer linear combinations of the basis vectors (4, 5) and (5, 7). That lattice will look like so:

LATTICE IMAGE HERE***

Since the angle between our basis vectors is small, it is very difficult to go backwards and find the closest lattice point to some target vector (x0, y0). This is the same as finding an integer vector x with

Ax ~ b, where A is the matrix whose columns are (4, 5) and (5, 7), and where x and b are column vectors.

Given the target b = (4, -3)

This means we are solving the linear system of equations 4x1 + 5x2 = 4

5x1 + 7x2 = -3

Solving for x1 and x2 we get

x1 = 43/3

x2 = -32/3

Rounding these to the nearest integers gives the candidate integer combination

14 and -11

These map to the point

14 * (4, 5) + -11 * (5, 7) = (56, 70) - (55, 77) = (1, -7)

This is not at all close to (4, -3), and the two linear combinations that actually come closest are

16 * (4, 5) + -12 * (5, 7) = (4, -4)

12 * (4, 5) + -9 * (5, 7) = (3, -3)
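A small NumPy sketch of this worked example, using Babai-style rounding (solve over the reals, then round each coefficient to the nearest integer), reproduces the numbers above:

 # Closest-vector example from above: naive rounding lands far from the target.
 import numpy as np
 
 A = np.array([[4, 5],
               [5, 7]], dtype=float)   # columns are the basis vectors (4, 5) and (5, 7)
 b = np.array([4, -3], dtype=float)    # target point, not on the lattice
 
 x_real = np.linalg.solve(A, b)        # [43/3, -32/3] ~ [14.33, -10.67]
 x_round = np.round(x_real)            # [14, -11]
 rounded_point = A @ x_round           # (1, -7): far from the target
 better = A @ np.array([16, -12])      # (4, -4)  \ the two lattice points
 also = A @ np.array([12, -9])         # (3, -3)  / actually closest to (4, -3)
 print(rounded_point, better, also)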

Classically, there is no known way to solve this problem in sub-exponential time, and there is currently no quantum algorithm that provides an exponential speedup for it the way Shor's algorithm does for factoring. Though getting the correct answer in this small case was trivial, an instance with a 10,000 x 10,000 lattice matrix would be exponentially more challenging and cannot currently be solved in practice. In the future, however, new algorithms for attacking lattice encryption may be found. Continuing research on quantum methods for breaking these cryptographic algorithms seems to be the ethical choice, since quantum techniques can also be used for good to create truly unbreakable encryption. Halting research would allow bad actors to move ahead of current infrastructure, opening up new vulnerabilities that defenders aren't ready to address.

Victor:

  • Introduction

// TODO

David:

The Quantum Threat to RSA Security: Shor's Factoring Algorithm

Peter Shor's discovery in 1994 of a quantum algorithm that finds the prime factors of an integer in an amount of time that only grows as a polynomial with the size of the integer opened up fundamentally new possibilities for cryptography. Most importantly, it showed it was possible to combine the parallelism inherent in quantum states with useful theorems from number theory to produce a serious threat to the security of systems based on RSA, since RSA assumes that factoring can never be carried out in polynomial time. All known classical factoring algorithms do not break this computational "speed limit," but Shor's algorithm shows that this security assumption is not valid in a world with large, error-correcting quantum computers.

To describe Shor's algorithm, we need three concepts. First, we need to define what quantum states are mathematically. Second, we need to see how quantum states give probabilities for different outcomes via the Born rule. Third, we need to describe how interacting qubits are modeled by a mathematical object known as the tensor product.

Fortunately, we can set aside questions about how to picture quantum states with our intuitions molded by the classical world, and instead give a definition in terms of vector spaces. A quantum state is a vector with complex number entries, and a quantum state space is a vector space spanned by linear combinations of specific vectors chosen to represent mutually exclusive outcomes of an experiment. These vectors are called basis states, since every other member of the larger quantum state space can be expressed as a linear combination of them using complex numbers. So, a quantum state Q can be expressed as Q = z1B1 + z2B2 + . . . + znBn, where {B1, . . . , Bn} is a basis for the state space and z1, . . . , zn are complex numbers.

Many applications involve finite-dimensional quantum state spaces, and Shor's algorithm falls under this category since the quantum portion of it uses qubits. A qubit is the quantum analog of the classical bit formed by linear combinations of the basis states 0 and 1, so a qubit is a 2-dimensional state space. This already suggests that it might be worthwhile to look for ways to process information with a quantum rather than a classical computer since a bit only has 2 possible states, 0 or 1, whereas a qubit has as many possible states as there are complex numbers, so continuously many. These definitions can't be the whole story, since actually interacting with a qubit only ever produces a 0 or a 1 with some probability, never a "fuzzy" version of these possible values. We need the Born rule to bridge the gap between a quantum state and the chances of what actually occurs in the world. The Born rule states that the probability of seeing an outcome associated with a basis state is the product of the complex coefficient of that basis state with its complex conjugate. The complex conjugate of a complex number a + ib is a - ib. Geometrically, this corresponds to reflecting the first complex number over the real axis in the complex plane to reach the second. So, for a qubit QB = z1(0) + z2(1), where z1 = a + ib, the probability of seeing the outcome 0 is (a + ib)(a - ib). (Just as with classical bits, we don't need to end up with a number on a dial; all that's required is that the events named "0" and "1" be mutually exclusive, like a device being turned on or off.)

It might look as if the Born rule comes out of nowhere, but in the context of the vector definition of quantum states, it makes sense for finding probabilities, which can never be negative and have to sum to 1. For the first property, the Born rule makes sure that negative probabilities never arise because multiplying a complex number by its complex conjugate always gives a non-negative real number, which we can see by algebra:

(a + ib)(a - ib) = a^2 - iab + iab - i^2 b^2 = a^2 - (-1)b^2 = a^2 + b^2.

This leads directly into the second property since the absolute value of a complex number is the length of the vector from the origin to that number in the complex plane: |a + ib| = sqrt(a^2 + b^2). This means that a^2 + b^2 = |a + ib|^2, and for all of the complex coefficients of the basis states, it must be the case that |z1|^2 + |z2|^2 + . . . + |zn|^2 = 1. This corresponds to the requirement that the probabilities of a set of mutually exclusive and exhaustive events sum to 1. To return to a qubit z1(0) + z2(1), this means that it must be the case that |z1|^2 + |z2|^2 = 1. This is always possible to set up since we can divide the entries of a quantum state vector by its length to give a vector of length 1.
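As a quick illustration of the Born rule and this normalization requirement, here is a minimal NumPy sketch; the coefficients are arbitrary and purely illustrative.

 # Born rule for a single qubit: normalize, then each outcome's probability is |coefficient|^2.
 import numpy as np
 
 z1, z2 = 1 + 2j, 3 - 1j                 # arbitrary complex coefficients of the basis states 0 and 1
 state = np.array([z1, z2])
 state = state / np.linalg.norm(state)   # rescale so the probabilities sum to 1
 
 probs = np.abs(state) ** 2              # z * conjugate(z) for each entry
 print(probs, probs.sum())               # two non-negative numbers summing to 1.0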

Now we can describe how the interacting qubits Shor's algorithm uses are modeled mathematically. It's clear that a qubit is a 2-dimensional vector space, but it's less clear how the vector space of multiple qubits should look, since there isn't a straightforward picture for motivation as there is in the classical domain. The use of what are known as tensor products gives the most natural way to describe multiple qubits. A tensor product is a vector space formed from two or more other vector spaces using basis vectors from each one to make new basis vectors for the new space. So, if QB1 and QB2 are qubits, both with the basis {0, 1}, then the combined quantum state space QB1 ⊗ QB2 is a vector space spanned by the four new basis states (00), (01), (10), and (11), so the new vector space is 4-dimensional. For larger systems of n qubits, the dimension of their tensor product becomes 2^n. This exponential increase in the dimension of the vector space of interacting qubits corresponds to how much more information quantum systems can store compared to classical ones, and it's the main source of the speedup Shor's algorithm offers over known classical factoring algorithms.
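A short NumPy sketch of this construction, assuming the basis ordering (00), (01), (10), (11) described above; for these column vectors the Kronecker product plays the role of the tensor product.

 # Tensor (Kronecker) product of qubit states: dimension doubles with each qubit.
 import numpy as np
 
 qb = np.array([1, 1]) / np.sqrt(2)      # a single qubit with equal weight on 0 and 1
 
 two_qubits = np.kron(qb, qb)            # combined state over (00), (01), (10), (11)
 print(two_qubits.shape)                 # (4,)
 
 n = 10
 state = np.array([1.0])
 for _ in range(n):
     state = np.kron(state, qb)          # add one qubit at a time
 print(state.shape)                      # (1024,) = 2^10 entries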

Here are APA citations for several possible sources. We don't need to use all of them, but any of them could be useful, especially for comparing the quantum situation with the classical and post-quantum ones. Manin & Zilber includes discussion of both quantum logic as a broader topic and the character of Shor's quantum algorithm for prime factorization, and how it fundamentally changed the security situation for public-key systems that rely on factoring being inefficient. Aaronson has a lot of interesting and more intuitive discussions about both theory and practical problems, like quantum computers and the potential dangers they pose, and separating hype from what could feasibly happen. Garey & Johnson has a long list of NP-complete problems across different areas of math and computer science that could be interesting for comparison across topics. Cormen et al. could be a good reference for parts of the classical domain, especially RSA and related public-key systems.

Aaronson, S. (2013). Quantum computing since Democritus. Cambridge University Press.

Beltrametti, E. G. & Cassinelli, G. (1981). The logic of quantum mechanics (Peter A. Carruthers, Ed.). Addison-Wesley Publishing Company.

Bengtsson, I. & Życzkowski, K. (2017). Geometry of quantum states: An introduction to quantum entanglement (2nd ed.). Cambridge University Press.

Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to algorithms (3rd ed.). MIT Press.

Garey, M. R. & Johnson, D. S. (1979). Computers and intractability: A guide to the theory of NP-completeness. W. H. Freeman and Company.

Hughes, R. I. G. (1989). The structure and interpretation of quantum mechanics. Harvard University Press.

Jordan, T. F. (1986). Quantum mechanics in simple matrix form. Dover Publications.

Kitaev, A. Y., Shen, A. H., & Vyalyi, M. N. (2002). Classical and quantum computation. American Mathematical Society.

Loepp, S. & Wootters, W. K. (2006). Protecting information: From classical error correction to quantum cryptography. Cambridge University Press.

Manin, Y. I. & Zilber, B. (2010). A course in mathematical logic for mathematicians (2nd ed.). Springer.

Peres, A. (2002). Quantum theory: Concepts and methods. Kluwer Academic Publishers.

Pierce, J. R. (1980). An introduction to information theory: Symbols, signals and noise (2nd ed.). Dover Publications.

Sipser, M. (2013). Introduction to the theory of computation (3rd ed.). Cengage Learning.

von Neumann, J. (1955). Mathematical foundations of quantum mechanics. (R. T. Beyer, Trans.). Princeton University Press.

- Victor: sources summary

ir.inflibnet.ac.in:8080/ir/bitstream/1944/212/3/03cali_43.pdf
https://www.geeksforgeeks.org/classical-cryptography-and-quantum-cryptography/amp/
https://thetechbrook.com/quantum-ethics/
https://link.springer.com/content/pdf/10.1007/s10676-017-9439-z.pdf
https://onlinelibrary.wiley.com/doi/epdf/10.1002/9781119795667.ch2
https://www.quantropi.com/differences-between-classical-quantum-post-quantum-cryptography/#:~:text=Classical%20cryptography%20uses%20difficult%20mathematical,and%20can%20withstand%20quantum%20attacks.
https://medium.com/edge-elections/cryptography-conventional-quantum-post-quantum-9f7f4ec84732
https://link.springer.com/chapter/10.1007/978-3-540-88702-7_1
https://arxiv.org/pdf/1804.00200.pdf
