CryptoDB
Mingqi Lu
Publications
Year | Venue  | Title
2024 | CRYPTO | Memory-Sample Lower Bounds for LWE
Abstract
In this work, we show memory-sample lower bounds for the fundamental problem of learning with errors (LWE). In this problem, a learner tries to learn a uniformly sampled secret $x \sim \mathbb{Z}_q^n$ from a stream of inner products plus errors sampled from discrete Gaussians of scale $\sigma$. Any learning algorithm requires either at least $\Omega(k^2 / \log(q / \sigma))$ bits of memory or at least $2^{\Omega(k)}$ samples, where $k = \Omega(n \log(1 / (1 - \phi(q)/q)))$. This matches the information-theoretic upper bound when $q$ is prime.
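To make the streaming model concrete, here is a minimal sketch of how such a sample stream is generated. The parameters $n$, $q$, $\sigma$, and the number of samples are illustrative choices, not values from the paper, and the error is a rounded continuous Gaussian used as a stand-in for the discrete Gaussian of scale $\sigma$.
```python
import numpy as np

def lwe_sample_stream(n=16, q=3329, sigma=3.2, m=8, seed=0):
    """Yield LWE samples (a_i, b_i) with b_i = <a_i, x> + e_i mod q.

    Illustrative only: errors are rounded continuous Gaussians, a common
    stand-in for the discrete Gaussian of scale sigma in the abstract.
    """
    rng = np.random.default_rng(seed)
    x = rng.integers(0, q, size=n)              # uniform secret x in Z_q^n
    for _ in range(m):
        a = rng.integers(0, q, size=n)          # uniform a_i in Z_q^n
        e = int(np.rint(rng.normal(0, sigma)))  # approximate discrete Gaussian error
        b = (int(a @ x) + e) % q                # noisy inner product
        yield a, b

stream = list(lwe_sample_stream())  # a low-memory one-pass learner could not afford to store all of these
```
The lower bound says that any learner observing this stream one sample at a time must either keep roughly $k^2 / \log(q/\sigma)$ bits of state or see exponentially many samples.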
In addition to LWE, our bounds apply to a wide range of learning problems. Following the work of Garg, Raz, and Tal [STOC 2018], we describe a learning problem by a learning matrix $M \colon A \times X \to \{0, 1, \cdots, q-1\}$ together with an error matrix $\varepsilon$. The learner tries to learn $x \sim X$ from a stream of samples $(a_1, b_1), \cdots, (a_m, b_m)$, where for every $i$, $a_i \sim A$ and $b_i \leftarrow t$ with probability $\varepsilon_{M(a_i, x), t}$.
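A minimal sketch of this general sampling process under assumed toy choices of $A$, $X$, $q$, the learning matrix $M$, and the error matrix $\varepsilon$ (none of these values come from the paper):
```python
import numpy as np

def general_sample_stream(M, eps, q, m=8, seed=0):
    """Sample stream for the general learning model (toy instantiation).

    M   : |A| x |X| integer matrix with entries in {0, ..., q-1} (learning matrix).
    eps : q x q row-stochastic matrix; eps[v, t] = Pr[b = t | M(a, x) = v] (error matrix).
    """
    rng = np.random.default_rng(seed)
    n_A, n_X = M.shape
    x = rng.integers(0, n_X)                 # hidden x drawn uniformly from X
    for _ in range(m):
        a = rng.integers(0, n_A)             # a_i drawn uniformly from A
        v = M[a, x]                          # noiseless value M(a_i, x)
        b = rng.choice(q, p=eps[v])          # b_i = t with probability eps[M(a_i, x), t]
        yield a, b

# Hypothetical parameters: q = 3, a random learning matrix, and an error
# matrix that outputs the true value with probability 0.8.
q = 3
rng = np.random.default_rng(1)
M = rng.integers(0, q, size=(32, 27))
eps = np.full((q, q), 0.1) + 0.7 * np.eye(q)   # each row sums to 1
samples = list(general_sample_stream(M, eps, q))
```
LWE is recovered by taking $A = X = \mathbb{Z}_q^n$, $M(a, x) = \langle a, x \rangle \bmod q$, and letting each row of $\varepsilon$ be a discrete Gaussian of scale $\sigma$ centered at the noiseless value.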
We characterize the learning problems that admit such memory-sample lower bounds as ``$q$-balanced'', a generalization of the $L_2$-extractor of [GRT18]. We also show a reduction from $q$-balanced to $L_2$-extractor, which provides a general way to prove that a problem is $q$-balanced and thus to apply our bounds.
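For context, the [GRT18] notion that $q$-balanced generalizes can be paraphrased as follows (normalization and constants may differ from the original statement; see [GRT18] for the precise definition):
```latex
% Paraphrase of the (k, \ell)-L_2-extractor notion of [GRT18]:
% a matrix M : A x X -> {-1, 1} is a (k, \ell)-L_2-extractor with error 2^{-r}
% if every non-negative test function with small L_2/L_1 ratio correlates with
% almost no rows of M.
\[
  \Pr_{a \sim A}\!\left[ \frac{|\langle M_a, f \rangle|}{\|f\|_1} \ge 2^{-r} \right] \le 2^{-k}
  \qquad \text{for every non-negative } f \colon X \to \mathbb{R}
  \text{ with } \frac{\|f\|_2}{\|f\|_1} \le 2^{\ell}.
\]
```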
Our proof builds on [GRT18] and the work of Garg, Kothari, Liu, Raz [APPROX/RANDOM 2021].
Coauthors
- Mingqi Lu (1)
- Junzhao Yang (1)