CryptoDB
Hunter Kippen
ORCID: 0000-0001-6953-0710
Publications

2023, CRYPTO
Revisiting Security Estimation for LWE with Hints from a Geometric Perspective
Abstract
The Distorted Bounded Distance Decoding Problem (DBDD) was introduced by Dachman-Soled et al. [Crypto ’20] as an intermediate problem between LWE and unique-SVP (uSVP). They presented an approach that reduces an LWE instance to a DBDD instance, integrates side information (or “hints”) into the DBDD instance, and finally reduces it to a uSVP instance, which can be solved via lattice reduction. They showed that this principled approach can lead to algorithms for side-channel attacks that perform better than ad-hoc algorithms that do not rely on lattice reduction.
The current work focuses on new methods for integrating hints into a DBDD instance. We view hints from a geometric perspective, as opposed to the distributional perspective from the prior work. Our approach provides the rigorous promise that, as hints are integrated into the DBDD instance, the correct solution remains a lattice point contained in the specified ellipsoid.
We instantiate our approach with two new types of hints: (1) Inequality hints, corresponding to the region of intersection of an ellipsoid and a halfspace; (2) Combined hints, corresponding to the region of intersection of two ellipsoids. Since the regions in (1) and (2) are not necessarily ellipsoids, we replace them with ellipsoidal approximations that circumscribe the region of intersection. Perfect hints are reconsidered as the region of intersection of an ellipsoid and a hyperplane, which is itself an ellipsoid. The compatibility of “approximate,” “modular,” and “short vector” hints from the prior work is examined.
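The abstract gives no implementation details, but the approximation in (1) has a classical closed form: the minimum-volume ellipsoid circumscribing an ellipsoid cut by a halfspace is the shallow-cut update from the ellipsoid method. The NumPy sketch below illustrates that textbook formula; the function name and interface are invented here, and the paper's actual construction may differ.

```python
import numpy as np

def cut_ellipsoid(c, Q, a, b):
    """Minimum-volume ellipsoid E' = {x : (x-c')^T Q'^{-1} (x-c') <= 1}
    circumscribing E(c, Q) intersected with the halfspace {x : a.x <= b}.
    Textbook shallow-cut formula from the ellipsoid method (dimension >= 2),
    shown only to illustrate the kind of "inequality hint" update the
    abstract describes; not the paper's exact construction."""
    n = len(c)
    s = np.sqrt(a @ Q @ a)          # scale of the cut direction under Q
    alpha = (a @ c - b) / s         # signed depth of the cut, in [-1, 1]
    if alpha >= 1.0:
        raise ValueError("halfspace does not intersect the ellipsoid")
    if alpha <= -1.0 / n:
        return c, Q                 # cut too shallow: E itself is minimal
    g = (Q @ a) / s                 # step direction
    tau = (1.0 + n * alpha) / (n + 1.0)
    delta = n * n * (1.0 - alpha * alpha) / (n * n - 1.0)
    sigma = 2.0 * (1.0 + n * alpha) / ((n + 1.0) * (1.0 + alpha))
    return c - tau * g, delta * (Q - sigma * np.outer(g, g))

# Toy usage: the unit ball in R^3 cut by the halfspace x_0 <= 0.
c_new, Q_new = cut_ellipsoid(np.zeros(3), np.eye(3),
                             np.array([1.0, 0.0, 0.0]), 0.0)
```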
We apply our techniques to the decryption failure and side-channel attack settings. We show that “inequality hints” can be used to model decryption failures, and that our new approach yields a geometric analogue of the “failure boosting” technique of D’Anvers et al. [ePrint, ’18]. We also show that “combined hints” can be used to fuse information from a decryption failure and a side-channel attack, and provide rigorous guarantees despite the data being non-Gaussian. We provide experimental data for both applications. The code that we have developed to implement the integration of hints and hardness estimates extends the Toolkit from prior work and has been released publicly.
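Similarly, for the “combined hints” in (2), a standard way to circumscribe the intersection of two ellipsoids is the “fusion” family: every convex combination of the two quadratic forms yields an ellipsoid containing the intersection, and one can select the member of least volume. The sketch below (names invented, assuming the ellipsoids intersect) shows that generic construction, not necessarily the paper's.

```python
import numpy as np

def fuse_ellipsoids(c1, P1, c2, P2, lam):
    """Ellipsoid {x : (x-c)^T P (x-c) <= rho} containing the intersection of
    E_i = {x : (x-c_i)^T P_i (x-c_i) <= 1}, i = 1, 2, built from the convex
    combination lam*f1 + (1-lam)*f2 of their quadratic forms (generic
    "fusion of ellipsoids", not the paper's construction); lam in (0, 1)."""
    P = lam * P1 + (1.0 - lam) * P2
    k = lam * (P1 @ c1) + (1.0 - lam) * (P2 @ c2)
    c = np.linalg.solve(P, k)       # complete the square around the new center
    rho = 1.0 - lam * (c1 @ P1 @ c1) - (1.0 - lam) * (c2 @ P2 @ c2) + c @ P @ c
    return c, P, rho

def min_volume_fusion(c1, P1, c2, P2, grid=99):
    """Scan lam and keep the enclosing ellipsoid of least volume; the volume
    is proportional to sqrt(det(rho * P^{-1})). Assumes E1 and E2 intersect."""
    best = None
    for lam in np.linspace(0.01, 0.99, grid):
        c, P, rho = fuse_ellipsoids(c1, P1, c2, P2, lam)
        if rho <= 0:
            continue                # degenerate combination, skip
        n = len(c1)
        log_vol = 0.5 * (n * np.log(rho) - np.linalg.slogdet(P)[1])
        if best is None or log_vol < best[0]:
            best = (log_vol, c, P, rho)
    return best[1], best[2], best[3]
```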
2021, TCC
BKW Meets Fourier: New Algorithms for LPN with Sparse Parities
Abstract
We consider the Learning Parity with Noise (LPN) problem with a sparse secret, where the secret vector $\mathbf{s}$ of dimension $n$ has Hamming weight at most $k$. We are interested in algorithms with asymptotic improvement in the \emph{exponent} beyond the state of the art.
Prior work in this setting presented algorithms with runtime $n^{c \cdot k}$ for constant $c < 1$, obtaining a constant factor improvement over brute force search, which runs in time ${n \choose k}$.
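For concreteness, the sketch below generates sparse-secret LPN samples following the standard definition in the abstract (dimension $n$, weight-$k$ secret, Bernoulli noise rate $\eta$); the parameter names are illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lpn_samples(n, k, eta, m):
    """m LPN samples (A, b = A s + e mod 2) with an n-bit secret s of
    Hamming weight exactly k and Bernoulli(eta) noise; standard
    definition, illustrative parameter names."""
    s = np.zeros(n, dtype=int)
    s[rng.choice(n, size=k, replace=False)] = 1   # sparse secret
    A = rng.integers(0, 2, size=(m, n))           # uniform coefficient rows
    e = (rng.random(m) < eta).astype(int)         # Bernoulli(eta) errors
    return A, (A @ s + e) % 2, s

A, b, s = lpn_samples(n=64, k=4, eta=0.1, m=1000)
```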
We obtain the following results:
- We first consider the \emph{constant} error rate setting, and in this case present a new algorithm that leverages a subroutine from the acclaimed BKW algorithm [Blum, Kalai, Wasserman, J.~ACM '03] as well as techniques from Fourier analysis for $p$-biased distributions. Our algorithm achieves asymptotic improvement in the exponent compared to prior work when the sparsity $k = k(n) = \frac{n}{\log^{1+1/c}(n)}$, where $c \in o(\log \log(n))$ and $c \in \omega(1)$. The runtime and sample complexity of this algorithm are approximately the same.
- We next consider the \emph{low noise} setting, where the error is subconstant. We present a new algorithm in this setting that requires only a \emph{polynomial} number of samples and achieves asymptotic improvement in the exponent compared to prior work when the sparsity $k = \frac{1}{\eta} \cdot \frac{\log(n)}{\log(f(n))}$ and the noise rate $\eta$ satisfies $\eta \neq 1/2$ and $\eta^2 = \frac{\log(n)}{n} \cdot f(n)$, for $f(n) \in \omega(1) \cap n^{o(1)}$. To obtain the improvement in sample complexity, we create subsets of samples using the \emph{design} of Nisan and Wigderson [J.~Comput.~Syst.~Sci. '94], so that any two subsets have a small intersection while the number of subsets is large (see the sketch after this list). Each of these subsets is used to generate a single $p$-biased sample for the Fourier analysis step. We then show that this allows us to bound the covariance of pairs of samples, which is sufficient for the Fourier analysis.
- Finally, we show that our first algorithm extends to the setting where the noise rate is very high, $1/2 - o(1)$, and in this case can be used as a subroutine to obtain new algorithms for learning DNFs and Juntas. Our algorithms achieve asymptotic improvement in the exponent for certain regimes. For DNFs of size $s$ with approximation factor $\epsilon$, this regime is when $\log \frac{s}{\epsilon} \in \omega \left( \frac{c}{\log n \log \log c}\right)$ and $\log \frac{s}{\epsilon} \in n^{1 - o(1)}$, for $c \in n^{1 - o(1)}$. For Juntas on $k$ variables, the regime is when $k \in \omega \left( \frac{c}{\log n \log \log c}\right)$ and $k \in n^{1 - o(1)}$, for $c \in n^{1 - o(1)}$.
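The Nisan–Wigderson design referenced in the second result has a standard polynomial-based construction: one set per polynomial of degree less than $d$ over a prime field $\mathbb{F}_q$, namely its graph $S_f = \{(x, f(x)) : x \in \mathbb{F}_q\}$. Two distinct such polynomials agree on fewer than $d$ points, so any two sets intersect in fewer than $d$ elements, while there are $q^d$ sets of size $q$ each. A minimal sketch of that classical construction (not code from the paper):

```python
from itertools import product

def nw_design(q, d):
    """Nisan-Wigderson design over the universe F_q x F_q (classical
    polynomial construction, not code from the paper): one set per
    polynomial of degree < d over the prime field F_q, given by its
    graph {(x, f(x)) : x in F_q}. Distinct degree-<d polynomials agree
    on fewer than d points, so pairwise intersections have size < d,
    while there are q^d sets of size q each. Requires q prime."""
    sets = []
    for coeffs in product(range(q), repeat=d):    # all degree-<d polynomials
        graph = frozenset(
            (x, sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q)
            for x in range(q)
        )
        sets.append(graph)
    return sets

# Toy check: q = 5, d = 2 gives 25 sets of size 5 with intersections of size < 2.
S = nw_design(5, 2)
assert len(S) == 25 and all(len(a & b) < 2 for i, a in enumerate(S) for b in S[i + 1:])
```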
Coauthors
- Dana Dachman-Soled (2)
- Huijing Gong (2)
- Tom Hanson (1)
- Hunter Kippen (2)
- Aria Shahverdi (1)