International Association for Cryptologic Research

CryptoDB

Neekon Vafa

Publications

2025, EUROCRYPT
Post-Quantum PKE from Unstructured Noisy Linear Algebraic Assumptions: Beyond LWE and Alekhnovich's LPN
Noisy linear algebraic assumptions with respect to random matrices, in particular Learning with Errors (LWE) and Alekhnovich Learning Parity with Noise (Alekhnovich LPN), are among the most investigated assumptions that imply post-quantum public-key encryption (PKE). They enjoy elegant mathematical structure. Indeed, efforts to build post-quantum PKE and advanced primitives such as homomorphic encryption and indistinguishability obfuscation have increasingly focused on these two assumptions and their variants. Unfortunately, this growing reliance on two assumptions leaves us vulnerable to potential quantum (and classical) attacks on Alekhnovich LPN and LWE. Quantum algorithms are a rapidly advancing area, and we must stay prepared for unexpected cryptanalytic breakthroughs. Just three decades ago, a short time frame in the development of our field, Shor's algorithm rendered most then-popular number-theoretic and algebraic assumptions quantumly broken. Furthermore, within the last several years, we have witnessed major classical and quantum breaks of several assumptions previously introduced for post-quantum cryptography. Therefore, we ask the following question: \emph{in a world where both LWE and Alekhnovich LPN are broken, can there still exist noisy linear assumptions that remain plausibly quantum hard and imply PKE?} To answer this question positively, we introduce two natural noisy linear algebraic assumptions that are both with respect to random matrices, exactly like LWE and Alekhnovich LPN, but with different error distributions. Our error distributions combine aspects of both small-norm and sparse error distributions. We design a PKE scheme from these assumptions and give evidence that they are likely to remain secure even in a world where both the LWE and Alekhnovich LPN assumptions are simultaneously broken. We also study basic properties of these assumptions and show that, in the parameter settings we employ to build PKE, neither of them is a ``lattice'' assumption, in the sense that we do not see a way to attack them using a lattice closest vector problem solver, except via NP-completeness reductions.
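To make the shape of such assumptions concrete, here is a minimal numpy sketch of a noisy linear sample $(A, As + e \bmod q)$, with three illustrative error styles: small-norm (LWE-like), sparse (Alekhnovich-LPN-like), and a sparse-and-small combination loosely mimicking the flavor described above. All parameters and distributions here are illustrative placeholders, not the assumptions or parameter regimes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 64, 128, 12289  # illustrative dimensions/modulus, not the paper's parameters

def noisy_linear_sample(error):
    """Return (A, b = A s + e mod q); the assumption is that b looks uniform."""
    A = rng.integers(0, q, size=(m, n))
    s = rng.integers(0, q, size=n)
    b = (A @ s + error) % q
    return A, b

# Small-norm error (LWE-style): every coordinate is a small integer.
e_small = rng.integers(-3, 4, size=m)

# Sparse error (Alekhnovich-LPN-style; over F_2 one would take q = 2): few nonzeros.
e_sparse = np.zeros(m, dtype=np.int64)
idx = rng.choice(m, size=m // 16, replace=False)
e_sparse[idx] = rng.integers(1, q, size=m // 16)

# A loose stand-in for the combined flavor: errors both sparse and small in norm.
e_sparse_small = np.zeros(m, dtype=np.int64)
idx = rng.choice(m, size=m // 16, replace=False)
e_sparse_small[idx] = rng.integers(-3, 4, size=m // 16)

A, b = noisy_linear_sample(e_sparse_small)
```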
2025, EUROCRYPT
The Complexity of Memory Checking with Covert Security
A memory checker is an algorithmic tool used to certify the integrity of a database maintained on a remote, unreliable, computationally bounded server. Concretely, it allows a user to issue instructions to the server and, after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. A recent result due to Boyle, Komargodski, and Vafa (BKV, STOC '24) showed a tradeoff between the size of the local storage and the number of queries the memory checker makes to the server upon every logical instruction. Specifically, they show that every non-trivial memory checker construction with inverse-polynomial soundness and local storage at most $n^{1 - \epsilon}$ must make $\Omega(\log n / \log \log n)$ queries, and this is tight up to constant factors given known constructions. However, an intriguing question is whether natural relaxations of the security guarantee could allow for more efficient constructions. We consider and adapt the notion of \emph{covert} security to the memory checking context, wherein the adversary can effectively cheat while taking the risk of being caught with constant probability. Notably, BKV's lower bound does not apply in this setting. We close this gap and prove that $\Omega(\log n / \log \log n)$ overhead is unavoidable even in the covert security setting. Our lower bound applies to any memory checker construction, including ones that use randomness and adaptivity and ones that rely on cryptographic assumptions and/or the random oracle model, as long as they satisfy a natural ``read-only reads'' property. This property requires a memory checker not to modify the contents of the database or local storage in the execution of a logical read instruction.
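For intuition about why logarithmic overhead is the benchmark here, below is a toy Merkle-tree memory checker: the client keeps only the root hash in trusted local storage and pays $O(\log n)$ server queries per logical read or write. This is a minimal sketch of the textbook hash-tree construction, not the BKV model or the covert-security setting discussed above.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

class MerkleMemoryChecker:
    """Toy memory checker: O(1) trusted state (the root), O(log n) queries per op."""
    def __init__(self, values):
        n = len(values)
        assert n and (n & (n - 1)) == 0, "power-of-two size for simplicity"
        self.n = n
        # The 'server' stores the leaves and internal hashes; only self.root is trusted.
        self.leaves = list(values)
        self.tree = [b""] * (2 * n)
        for i, v in enumerate(values):
            self.tree[n + i] = H(v)
        for i in range(n - 1, 0, -1):
            self.tree[i] = H(self.tree[2 * i], self.tree[2 * i + 1])
        self.root = self.tree[1]  # the only value kept in trusted local storage

    def read(self, i):
        # Fetch the leaf and its authentication path from the untrusted server,
        # then recompute the root: log n hash checks per logical read.
        v = self.leaves[i]
        node, h = self.n + i, H(v)
        while node > 1:
            sib = self.tree[node ^ 1]
            h = H(h, sib) if node % 2 == 0 else H(sib, h)
            node //= 2
        if h != self.root:
            raise RuntimeError("integrity failure detected")
        return v

    def write(self, i, v):
        # Verify the old value, then update the leaf and its path, and the root.
        self.read(i)
        self.leaves[i] = v
        node = self.n + i
        self.tree[node] = H(v)
        while node > 1:
            node //= 2
            self.tree[node] = H(self.tree[2 * node], self.tree[2 * node + 1])
        self.root = self.tree[1]

mc = MerkleMemoryChecker([b"a", b"b", b"c", b"d"])
mc.write(2, b"x")
assert mc.read(2) == b"x"
```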
2024, TCC
Sparse Linear Regression and Lattice Problems
Sparse linear regression (SLR) is a well-studied problem in statistics where one is given a design matrix $\mathbf{X} \in \mathbb{R}^{m \times n}$ and a response vector $\mathbf{y} = \mathbf{X} \boldsymbol{\theta}^* + \mathbf{w}$ for a $k$-sparse vector $\boldsymbol{\theta}^*$ (that is, $\|\boldsymbol{\theta}^*\|_0 \leq k$) and small, arbitrary noise $\mathbf{w}$, and the goal is to find a $k$-sparse $\widehat{\boldsymbol{\theta}} \in \mathbb{R}^{n}$ that minimizes the mean squared prediction error $\frac{1}{m} \|\mathbf{X} \widehat{\boldsymbol{\theta}} - \mathbf{X} \boldsymbol{\theta}^*\|^2_2$. While $\ell_1$-relaxation methods such as basis pursuit, Lasso, and the Dantzig selector solve SLR when the design matrix is well-conditioned, no general algorithm is known, nor is there any formal evidence of hardness in an average-case setting with respect to all efficient algorithms. We give evidence of average-case hardness of SLR with respect to all efficient algorithms, assuming the worst-case hardness of lattice problems. Specifically, we give an instance-by-instance reduction from a variant of the bounded distance decoding (BDD) problem on lattices to SLR, where the condition number of the lattice basis that defines the BDD instance is directly related to the restricted eigenvalue condition of the design matrix, which characterizes some of the classical statistical-computational gaps for sparse linear regression. Also, by appealing to worst-case to average-case reductions from the world of lattices, this shows hardness for a distribution of SLR instances; while the design matrices are ill-conditioned, the resulting SLR instances are in the identifiable regime. Furthermore, for well-conditioned (essentially) isotropic Gaussian design matrices, where Lasso is known to behave well in the identifiable regime, we show hardness of outputting any good solution in the unidentifiable regime where there are many solutions, assuming the worst-case hardness of standard and well-studied lattice problems.
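As a concrete illustration of the SLR setup, the following sketch generates a $k$-sparse instance with a well-conditioned isotropic Gaussian design, the easy regime where Lasso provably works, and evaluates the mean squared prediction error from the abstract. The sizes, noise level, and Lasso penalty are arbitrary illustrative choices; the hardness results above concern ill-conditioned designs, where no such method is known to succeed.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
m, n, k = 200, 500, 5  # illustrative sizes, not from the paper

# Well-conditioned (isotropic Gaussian) design: the regime where Lasso works.
X = rng.standard_normal((m, n))
theta_star = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
theta_star[support] = rng.standard_normal(k)
w = 0.01 * rng.standard_normal(m)  # small noise
y = X @ theta_star + w

theta_hat = Lasso(alpha=0.05).fit(X, y).coef_

# Mean squared prediction error: (1/m) * ||X theta_hat - X theta_star||_2^2.
mspe = np.mean((X @ theta_hat - X @ theta_star) ** 2)
print(f"MSPE = {mspe:.4f}, recovered support size = {np.sum(theta_hat != 0)}")
```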
2024, TCC
Indistinguishability Obfuscation from Bilinear Maps and LPN Variants
We construct an indistinguishability obfuscation (IO) scheme from the sub-exponential hardness of the decisional linear problem on bilinear maps together with two variants of the learning parity with noise (LPN) problem, namely large-field LPN and (binary-field) sparse LPN. This removes the need to assume the existence of polynomial-stretch PRGs in $\mathsf{NC}^0$ from the state-of-the-art construction of IO (Jain, Lin, and Sahai, EUROCRYPT 2022). As an intermediate step in our construction, we abstract out a notion of structured-seed polynomial-stretch PRGs in $\mathsf{NC}^0$ which is implied by both sparse LPN and the existence of polynomial-stretch PRGs in $\mathsf{NC}^0$. As immediate applications, from the sub-exponential hardness of the decisional linear assumption on bilinear groups, large-field LPN, and sparse LPN, we get alternative constructions of (a) FHE without lattices or circular security assumptions (Canetti, Lin, Tessaro, and Vaikuntanathan, TCC 2015), and (b) perfect zero-knowledge adaptively-sound Succinct Non-interactive Arguments (SNARGs) for NP (Waters and Wu, STOC 2024).
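For reference, a (binary-field) sparse LPN sample has the following shape: each row of the public matrix has a fixed small number of nonzero entries, and the label is the inner product with a secret plus Bernoulli noise. The sketch below is a generic illustration with placeholder parameters, not the parameter regime used in the construction; large-field LPN replaces $\mathbb{F}_2$ with a large field.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 128, 512, 3  # illustrative: dimension, number of samples, row sparsity
eps = 0.05             # illustrative Bernoulli noise rate

s = rng.integers(0, 2, size=n)  # secret over F_2

# Sparse LPN: each row of A has exactly k nonzero entries.
A = np.zeros((m, n), dtype=np.int64)
for row in A:
    row[rng.choice(n, size=k, replace=False)] = 1

e = (rng.random(m) < eps).astype(np.int64)  # Bernoulli(eps) errors
b = (A @ s + e) % 2  # the assumption: (A, b) is hard to tell from (A, uniform)
```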
2023, CRYPTO
MacORAMa: Optimal Oblivious RAM with Integrity
Surya Mathialagan and Neekon Vafa
Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM '96), is a primitive that allows a client to perform RAM computations on an external database without revealing any information through the access pattern. For a database of size $N$, well-known lower bounds show that a multiplicative overhead of $\Omega(\log N)$ in the number of RAM queries is necessary assuming $O(1)$ client storage. A long sequence of works culminated in the asymptotically optimal construction of Asharov, Komargodski, Lin, and Shi (CRYPTO '21) with $O(\log N)$ worst-case overhead and $O(1)$ client storage. However, this optimal ORAM is known to be secure only in the \emph{honest-but-curious} setting, where an adversary is allowed to observe the access patterns but not modify the contents of the database. In the \emph{malicious} setting, where an adversary is additionally allowed to tamper with the database, this construction and many others in fact become insecure. In this work, we construct the first maliciously secure ORAM with worst-case $O(\log N)$ overhead and $O(1)$ client storage assuming one-way functions, which are also necessary. By the $\Omega(\log N)$ lower bound, our construction is asymptotically optimal. To attain this overhead, we develop techniques to intricately interleave online and offline memory checking for malicious security. Furthermore, we complement our positive result by showing the impossibility of a \emph{generic} overhead-preserving compiler from honest-but-curious to malicious security, barring a breakthrough in memory checking.
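To see what the $O(\log N)$ bound is measured against, here is the trivial baseline: a linear-scan ``ORAM'' that touches every cell on every access, hiding the access pattern perfectly at $O(N)$ overhead. This is a didactic sketch only; it provides no integrity and is unrelated to the construction in the paper.

```python
class LinearScanORAM:
    """Trivial access-pattern hiding: every logical access physically reads and
    writes all N cells, so the server sees the same pattern regardless of the
    logical index. Overhead is O(N), versus the O(log N) optimum above."""
    def __init__(self, data):
        self.data = list(data)  # stands in for the external (untrusted) database

    def access(self, op, index, value=None):
        out = None
        for i in range(len(self.data)):  # identical N touches for every access
            v = self.data[i]             # physical read of every cell
            if i == index:
                out = v
                if op == "write":
                    v = value
            self.data[i] = v             # physical write of every cell
        return out

oram = LinearScanORAM([0, 0, 0, 0])
oram.access("write", 2, 7)
assert oram.access("read", 2) == 7
```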