Learning With Errors: Lattice Methods
Let's explore linear regression and lattice methods.
Linear Regression
Suppose you have a linear regression with a couple of predictors and no intercept term:
β₁x₁ + β₂x₂ = y + ε
where the x's are inputs, the βs are fixed but unknown, y is the output, and ε is random error.
Given n observations (x₁, x₂, y + ε), linear regression estimates the parameters β₁ and β₂.
I haven't said so, but I implicitly assumed all the numbers above are real. Of course they're real. It would be strange if they weren't!
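To make the baseline concrete, here is a quick least-squares fit of the two-predictor, no-intercept model above. The coefficients, sample size, and noise level are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.array([2.0, -3.0])               # the "unknown" true coefficients
X = rng.normal(size=(100, 2))              # n = 100 observations of (x1, x2)
y = X @ beta + 0.1 * rng.normal(size=100)  # outputs with small real-valued noise

# Ordinary least squares, with no intercept term:
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With real numbers and modest noise, `beta_hat` lands close to the true coefficients; the interesting question is what happens when the arithmetic is no longer over the reals.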
Learning With Errors
Well, we're about to do something strange. We're going to pick a prime number p and do our calculations modulo p, except for the addition of the error ε. Our inputs (x₁, x₂) are going to be pairs of integers. Someone is going to compute:
r = β₁x₁ + β₂x₂ mod p
where β₁ and β₂ are secret integers. Then they're going to tell us:
r/p + ε
where ε is a random variable on the interval [0, 1]. We give them n pairs (x₁, x₂), and they give back n values of r/p with noise added. Our job is to infer the βs.
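The setup can be sketched as a small "oracle" in Python. The modulus, secret coefficients, and noise scale below are illustrative; the noise is scaled well below the [0, 1] interval mentioned above so that a toy example remains recoverable at all.

```python
import random

random.seed(1)
p = 9973                 # an illustrative prime modulus
beta = (1234, 5678)      # secret coefficients, known only to the oracle

def lwe_sample():
    """Return inputs (x1, x2) and the noisy value r/p + eps."""
    x1 = random.randrange(p)
    x2 = random.randrange(p)
    r = (beta[0] * x1 + beta[1] * x2) % p
    eps = random.random() * 0.01   # small noise, scaled down for this toy
    return (x1, x2), r / p + eps

(x1, x2), noisy = lwe_sample()
```

Each call hands the adversary one noisy equation; the security question is how many such equations it takes, and how much work, to pin down β₁ and β₂.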
This problem is called learning with errors, or LWE. It's like linear regression, but much harder when the problem size is bigger. Instead of just two inputs, we could have m inputs with m secret coefficients, where m is large. Depending on the number of variables m, the number of equations n, the modulus p, and the probability distribution on ε, the problem may be possible to solve but computationally very difficult.
Why is it so difficult? Working mod p is discontinuous: a little bit of error can completely change our estimate of the solution. If n is large enough, we could recover the coefficients anyway, using something like least squares. But how would we carry that out? If m and p are small, we can just try all pᵐ possibilities, but that's not going to be practical if m and p are large.
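For tiny m and p, the brute-force search just mentioned is easy to sketch: enumerate all pᵐ candidate coefficient vectors and keep the one whose predictions best match the noisy observations, measuring distance with wrap-around since everything lives mod p. All parameters here are made up for illustration.

```python
import random
from itertools import product

random.seed(2)
p = 31              # tiny prime, so there are only p**2 = 961 candidates
beta = (7, 19)      # the secret pair we will pretend not to know

# Collect 20 noisy samples from the toy oracle.
samples = []
for _ in range(20):
    x1, x2 = random.randrange(p), random.randrange(p)
    r = (beta[0] * x1 + beta[1] * x2) % p
    samples.append(((x1, x2), r / p + random.random() * 0.01))

def misfit(cand):
    # Total wrap-around distance between predictions and observations.
    total = 0.0
    for (x1, x2), v in samples:
        pred = ((cand[0] * x1 + cand[1] * x2) % p) / p
        d = abs(pred - v) % 1.0
        total += min(d, 1.0 - d)
    return total

best = min(product(range(p), repeat=2), key=misfit)
```

With p = 31 and m = 2 this search is instant, but the candidate count pᵐ grows exponentially in m, which is exactly why large parameters put the problem out of reach.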
In linear regression, we assume there is some (approximately) linear process out in the real world that we're able to observe with limited accuracy. Nobody's playing a game with us; that's just how data come to us. But with LWE, we are playing a game that someone has designed to be hard. Why? For cryptography. In particular, quantum-resistant cryptography.
Post-Quantum Cryptography
Variations on LWE are the basis for several proposed encryption algorithms that are believed to be secure even if an adversary has access to a quantum computer.
The public-key encryption systems in common use today would all be breakable if quantum computing becomes practical. They depend on mathematical problems like factoring and discrete logarithms being computationally difficult, which they appear to be with traditional computing resources. But we know these problems could be solved in polynomial time on a quantum computer using Shor's algorithm. LWE, by contrast, remains a hard problem even on a quantum computer. Or so we suspect.
The US government's National Institute of Standards and Technology (NIST) is holding a competition to identify quantum-resistant encryption algorithms. Last month, they announced the 26 algorithms that made it to the second round. Many of these algorithms depend on LWE or its variations.
One variation is LWR (learning with rounding), which uses rounding rather than adding random noise. There are also ring-based counterparts, RLWE and RLWR, which add random errors and use rounding, respectively. And there are polynomial variations such as poly-LWE, which uses a polynomial-based learning with errors problem. The general category for these methods is lattice methods.
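The rounding idea behind LWR can be sketched in a few lines, with made-up parameters: instead of adding random noise, the oracle deterministically maps r from modulus p down to a coarser modulus q, and the discarded low-order bits play the role of the error term.

```python
p, q = 1024, 32      # illustrative moduli with q < p
beta = (123, 456)    # illustrative secret coefficients

def lwr_sample(x1, x2):
    # Compute r mod p, then round it to the coarser modulus q;
    # the dropped low-order information acts as deterministic "noise".
    r = (beta[0] * x1 + beta[1] * x2) % p
    return round(r * q / p) % q
```

Because the "noise" is a deterministic function of the inputs, LWR samples need no randomness per query, which is one of its practical attractions.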
Lattice Methods
Of the public-key algorithms that made it to the second round of the NIST competition, 9 out of 17 use lattice-based cryptography:
 CRYSTALS-KYBER
 FrodoKEM
 LAC
 NewHope
 NTRU
 NTRU Prime
 Round5
 SABER
 Three Bears
Also, two of the nine digital signature algorithms are based on lattice problems:
 CRYSTALS-DILITHIUM
 FALCON
Based purely on the names, and not on the merits of the algorithms, I hope the winner is one of the methods with a science fiction allusion in the name.
Published at DZone with permission of John Cook, DZone MVB. See the original article here.