CryptoDB
Aayush Jain
Publications
Year
Venue
Title
2024
CRYPTO
Non-Interactive Zero-Knowledge from LPN and MQ
Abstract
We give the first construction of non-interactive zero-knowledge (NIZK) arguments from post-quantum assumptions other than Learning with Errors. In particular, we achieve NIZK under the polynomial hardness of the Learning Parity with Noise (LPN) assumption, and the exponential hardness of solving random underdetermined multivariate quadratic equations (MQ). We also construct NIZK satisfying statistical zero-knowledge assuming a new variant of LPN, Dense-Sparse LPN, introduced by Dao and Jain (ePrint 2024), together with exponentially-hard MQ.
The main technical ingredient of our construction is an extremely natural (but only in hindsight!) construction of correlation-intractable (CI) hash functions from MQ, for a NIZK-friendly sub-class of constant-degree polynomials that we call concatenated constant-degree polynomials. Under exponential security, this hash function also satisfies the stronger notion of approximate CI for concatenated constant-degree polynomials. The NIZK construction then follows from a prior blueprint of Brakerski-Koppula-Mour (CRYPTO 2020). In addition, we also show how to construct (approximate) CI hashing for degree-$d$ polynomials from the (exponential) hardness of solving random degree-$d$ equations, a natural generalization of MQ. To realize NIZK with statistical zero-knowledge, we design a lossy public-key encryption scheme with approximate linear decryption and inverse-polynomial decryption error from Dense-Sparse LPN. These constructions may be of independent interest.
Our work therefore gives a new way to leverage MQ with uniformly random equations, which has found few cryptographic applications to date. Indeed, most applications in the context of encryption and signature schemes make use of structured variants of MQ, where the polynomials are not truly random but possess a hidden planted structure. We believe that the MQ assumption may plausibly find future use in designing other advanced proof systems.
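For concreteness, here is a minimal Python sketch of the search version of the MQ problem used above, over $\mathbb{F}_2$: sample uniformly random quadratic polynomials and a random point, and ask for any consistent preimage. The parameter values, the restriction to $\mathbb{F}_2$, and the omission of constant terms are illustrative choices, not the paper's.

```python
# Minimal sketch of (search) MQ over F_2: m uniformly random quadratic polynomials
# in n variables, evaluated at a random secret point. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 10          # underdetermined regime: fewer equations than variables

# Each polynomial is given by a random upper-triangular coefficient matrix
# (quadratic part; diagonal entries give linear terms since x^2 = x over F_2).
Q = np.triu(rng.integers(0, 2, size=(m, n, n)))

def evaluate(Q, x):
    """Evaluate all m quadratic polynomials at x over F_2."""
    return np.array([(x @ Qi @ x) % 2 for Qi in Q])

x_secret = rng.integers(0, 2, size=n)
y = evaluate(Q, x_secret)   # public image; search-MQ asks for some x with P(x) = y
print(y)
```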
2024
CRYPTO
Lossy Cryptography from Code-Based Assumptions
Abstract
Over the past few decades, we have seen a proliferation of advanced cryptographic primitives with lossy or homomorphic properties built from various assumptions such as Quadratic Residuosity, Decisional Diffie-Hellman, and Learning with Errors. These primitives imply hard problems in the complexity class $\mathcal{SZK}$ (statistical zero-knowledge); as a consequence, they can only be based on assumptions that are broken in $\mathcal{BPP}^{\mathcal{SZK}}$.
This poses a barrier for building advanced cryptography from code-based assumptions such as Learning Parity with Noise (LPN), as LPN is only known to be in $\mathcal{BPP}^{\mathcal{SZK}}$ under an extremely low noise rate $\frac{\log^2 n}{n}$, for which it is broken in quasi-polynomial time.
In this work, we propose a new code-based assumption: Dense-Sparse LPN, that falls in the complexity class $\mathcal{BPP}^{\mathcal{SZK}}$ and is conjectured to be secure against subexponential time adversaries. Our assumption is a variant of LPN that is inspired by McEliece's cryptosystem and random $k$-XOR in average-case complexity. Roughly, the assumption states that
\[(\mathbf{T}\, \mathbf{M}, \mathbf{s} \,\mathbf{T}\, \mathbf{M} + \mathbf{e}) \quad \text{is indistinguishable from}\quad (\mathbf{T} \,\mathbf{M}, \mathbf{u}),\]
for a random (dense) matrix $\mathbf{T}$, random sparse matrix $\mathbf{M}$, and sparse noise vector $\mathbf{e}$ drawn from the Bernoulli distribution with inverse polynomial noise probability.
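As a concrete reading of the displayed assumption, the following minimal sketch samples the "real" and "uniform" distributions over $\mathbb{F}_2$. The dimensions, column sparsity, and noise rate are illustrative placeholders, not the paper's parameters.

```python
# Minimal sketch of the two distributions in the Dense-Sparse LPN assumption above,
# over F_2. All parameters (n, m, k, noise rate) are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 256, 3          # secret length n, m samples, column sparsity k
eps = 0.05                    # Bernoulli noise probability (inverse-polynomial in spirit)

T = rng.integers(0, 2, size=(n, n))          # random dense matrix T
M = np.zeros((n, m), dtype=int)              # random sparse matrix M: k ones per column
for j in range(m):
    M[rng.choice(n, size=k, replace=False), j] = 1

A = (T @ M) % 2                              # public matrix T*M
s = rng.integers(0, 2, size=n)               # secret
e = (rng.random(m) < eps).astype(int)        # sparse Bernoulli noise
b_real = (s @ A + e) % 2                     # "real" sample:  s*T*M + e
b_rand = rng.integers(0, 2, size=m)          # "ideal" sample: uniform u
# Dense-Sparse LPN: (A, b_real) should be indistinguishable from (A, b_rand).
```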
We leverage our assumption to build lossy trapdoor functions (Peikert-Waters STOC 08). This gives the first post-quantum alternative to the lattice-based construction in the original paper. Lossy trapdoor functions, being a fundamental cryptographic tool, are known to enable a broad spectrum of both lossy and non-lossy cryptographic primitives; our construction thus implies these primitives in a generic manner. In particular, we achieve collision-resistant hash functions with plausible subexponential security, improving over a prior construction from LPN with noise rate $\frac{\log^2 n}{n}$ that is only quasi-polynomially secure.
2024
CRYPTO
A Systematic Study of Sparse LWE
Abstract
In this work, we introduce the sparse LWE assumption, an assumption that draws inspiration from both Learning with Errors (Regev JACM 10) and Sparse Learning Parity with Noise (Alekhnovich FOCS 02). Exactly like LWE, this assumption posits indistinguishability of $(\mathbf{A}, \mathbf{s}\mathbf{A}+\mathbf{e} \mod p)$ from $(\mathbf{A}, \mathbf{u})$ for a random $\mathbf{u}$, where the secret $\mathbf{s}$ and the error vector $\mathbf{e}$ are generated exactly as in LWE. However, exactly like sparse LPN, the coefficient matrix $\mathbf{A}$ is chosen randomly from $\mathbb{Z}^{n\times m}_{p}$ so that each column has Hamming weight exactly $k$ for some small $k$. We study the problem in the regime where $k$ is a constant or polylogarithmic. The primary motivation for proposing this assumption is efficiency. Compared to LWE, the samples can be computed and stored with roughly an $O(n/k)$ factor improvement in efficiency. Our results can be summarized as follows:
\begin{itemize}
\item {\bf Foundations:} We show several properties of sparse LWE samples, including: 1) The hardness of LWE/LPN with dimension $k$ implies the hardness of sparse LWE/LPN with sparsity $k$ and arbitrary dimension $n \geq k$. 2) When the number of samples is $m=\Omega(n \log p)$, the length of the shortest vector of a lattice spanned by the rows of a random sparse matrix is large, close to that of a random dense matrix of the same dimension (up to a small constant factor). 3) Trapdoors with small polynomial norm exist for random sparse matrices of dimension $n \times m$ with $m = O(n \log p)$. 4) Efficient algorithms exist for sampling such matrices together with trapdoors when the dimension is $n \times m$ with $m = \tilde{O}(n^2)$.
\item {\bf Cryptanalysis:} We examine the suite of algorithms that have been used to break LWE and sparse LPN. While many of the attacks that apply to LWE do not naively exploit sparsity, we consider natural extensions that make use of it. We propose a model to capture all of these attacks, and use this model to suggest heuristics for identifying concrete parameters. Our initial cryptanalysis suggests that, concretely, sparse LWE with a modest $k$ and a slightly larger dimension than LWE will satisfy a similar level of security as LWE with similar parameters.
\item {\bf Applications:} We show that the hardness of sparse LWE implies very efficient homomorphic encryption schemes for low degree computations. We obtain the first secret key Linearly Homomorphic Encryption (LHE) schemes with {\em slightly super-constant}, or even {\em constant}, overhead, which further has applications to private information retrieval, private set intersection, etc. We also obtain secret key homomorphic encryption for arbitrary constant-degree polynomials with slightly super-constant, or constant, overhead.
\end{itemize}
We stress that our results are preliminary. However, our results make a strong case for further investigation of sparse LWE.
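The sketch below samples the sparse LWE distribution described above, with exactly $k$ nonzero entries per column of $\mathbf{A}$. The modulus, dimensions, and the small bounded error (a stand-in for the LWE error distribution) are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of the sparse LWE distribution: A in Z_p^{n x m} with exactly k
# nonzeros per column; the LWE error is replaced here by a small bounded uniform
# error for simplicity. All parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
p, n, m, k = 3329, 32, 128, 4

A = np.zeros((n, m), dtype=np.int64)
for j in range(m):                               # k random nonzero entries per column
    rows = rng.choice(n, size=k, replace=False)
    A[rows, j] = rng.integers(1, p, size=k)

s = rng.integers(0, p, size=n)
e = rng.integers(-3, 4, size=m)                  # small error (stand-in for a Gaussian)
b_real = (s @ A + e) % p
b_rand = rng.integers(0, p, size=m)
# Sparse LWE: (A, b_real) is conjectured indistinguishable from (A, b_rand).
```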
2023
EUROCRYPT
Maliciously-Secure MrNISC in the Plain Model
Abstract
We study strong versions of round-optimal MPC.
A recent work of Benhamouda and Lin (TCC '20)
identified a version of secure multiparty computation (MPC), termed
Multiparty reusable Non-Interactive Secure Computation (MrNISC),
that combines at the same time several fundamental aspects of secure
computation with standard simulation security into one primitive:
round-optimality, succinctness, concurrency, and adaptivity. In more
detail, MrNISC is essentially a two-round MPC protocol where the first
round of messages serves as a reusable commitment to the private inputs
of participating parties. Using these commitments, any subset of parties
can later compute any function of their choice on their respective inputs
by broadcasting one message each. Anyone who sees these parties'
commitments and evaluation messages (even an outside observer) can learn
the function output and nothing else. Importantly, the input commitments
can be computed without knowing anything about other participating
parties (neither their identities nor their number) and they are reusable
across any number of computations.
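To make the message flow of this primitive concrete, here is a plain-Python mock of the MrNISC interface just described: a reusable first-round commitment per party, then one broadcast evaluation message per party per computation, from which anyone recovers the output. This is a non-cryptographic sketch of the interface only (it hides nothing); the function and type names are illustrative, not from the paper.

```python
# Non-cryptographic mock of the MrNISC message flow (interface only, no security).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Commitment:
    party_id: int
    payload: bytes          # in a real scheme: a hiding, reusable input encoding

def commit(party_id: int, x: int) -> Commitment:
    # A real scheme hides x; this mock only illustrates the flow.
    return Commitment(party_id, x.to_bytes(8, "big"))

def eval_message(f: Callable, my_x: int, commitments: List[Commitment]) -> bytes:
    # One broadcast message per party per computation (trivial stand-in here).
    return my_x.to_bytes(8, "big")

def reconstruct(f: Callable, messages: List[bytes]) -> int:
    xs = [int.from_bytes(m, "big") for m in messages]
    return f(*xs)           # an observer learns f(x_1, ..., x_n) (and, in a real scheme, nothing else)

# Round 1 (once, reusable): parties publish commitments without knowing each other.
c1, c2 = commit(1, 5), commit(2, 7)
# Round 2 (per computation): a subset broadcasts one message each for a chosen f.
f = lambda a, b: a + b
msgs = [eval_message(f, 5, [c1, c2]), eval_message(f, 7, [c1, c2])]
print(reconstruct(f, msgs))   # 12
```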
By now, there are several known MrNISC protocols from either (bilinear)
group-based assumptions or from LWE. They all satisfy semi-malicious
security (in the plain model) and require trusted setup assumptions in
order to get malicious security. We are interested in maliciously secure
MrNISC protocols in the plain model, without trusted setup. Since
the standard notion of polynomial simulation is unachievable in less
than four rounds, we focus on security with \emph{super-polynomial}-time
simulation (SPS).
Our main result is the first maliciously secure SPS MrNISC in the plain
model. The result is obtained by generically compiling any
semi-malicious MrNISC, and the security of our compiler relies on several
well-studied assumptions: an indistinguishability obfuscator, DDH over Z^*_p and over asymmetric pairing groups,
and a time-lock puzzle (all of which need to be sub-exponentially hard).
As a special case, we obtain the first 2-round maliciously secure SPS
MPC based on well-founded assumptions. This MPC is also concurrently
self-composable and its first message is short (i.e., its size is
independent of the number of the participating parties) and reusable
throughout any number of computations. Prior to our work, for two-round maliciously secure MPC, neither concurrent MPC, nor reusable MPC, nor MPC whose first message is independent of the number of parties was known from any set of assumptions. Of independent interest is one of our building blocks: the first construction of a one-round non-malleable commitment scheme from well-studied assumptions, avoiding keyless hash functions and non-standard hardness-amplification assumptions.
2023
EUROCRYPT
Polynomial-Time Cryptanalysis of the Subspace Flooding Assumption for Post-Quantum iO
Abstract
Indistinguishability Obfuscation (iO) is a highly versatile primitive implying a myriad of advanced cryptographic applications. Until recently, the feasibility of iO was unclear; this changed with works (Jain-Lin-Sahai STOC 2021, Jain-Lin-Sahai Eurocrypt 2022) showing that iO can finally be based upon well-studied hardness assumptions. Unfortunately, one of these assumptions is broken in quantum polynomial time.
Luckily, the line of work of Brakerski et al. (Eurocrypt 2020), Gay-Pass (STOC 2021), Wee-Wichs (Eurocrypt 2021), Brakerski et al. (ePrint 2021), and Devadas et al. (TCC 2021) simultaneously created new pathways to construct iO with plausible post-quantum security from new assumptions, namely a new form of circular security of LWE in the presence of leakages.
At the same time, effective cryptanalysis of this line of work has also begun to emerge (Hopkins et al. Crypto 2021).
It is important to identify the simplest possible conjectures that yield post-quantum iO and can be understood through known cryptanalytic tools. In that spirit, and in light of the cryptanalysis of Hopkins et al., recently Devadas et al. gave an elegant construction of iO from a fully-specified and simple-to-state assumption along with a thorough initial cryptanalysis.
Our work gives a polynomial-time distinguisher on their "final assumption" for their scheme. Our algorithm is extremely simple to describe: Solve a carefully designed linear system arising out of the assumption. The argument of correctness of our algorithm, however, is nontrivial.
We also analyze the "T-sum" version of the same assumption described by Devadas et al. and, under a reasonable conjecture, rule out the assumption for any value of T that implies iO.
2023
EUROCRYPT
On the Optimal Succinctness and Efficiency of Functional Encryption and Attribute-Based Encryption
Abstract
We investigate the best-possible (asymptotic) efficiency of functional encryption (FE) and attribute-based encryption (ABE) by proving inherent space-time trade-offs and constructing nearly optimal schemes. We consider the general notion of partially hiding functional encryption (PHFE), capturing both FE and ABE, and the most efficient computation model of random-access machine (RAM). In PHFE, a secret key sk_f is associated with a function f, whereas a ciphertext ct_x(y) is tied to a public input x and encrypts a private input y. Decryption reveals f(x,y) and nothing else about y.
We present the first PHFE for RAM solely based on the necessary assumption of FE for circuits. Significantly improving upon the efficiency of prior schemes, our construction achieves nearly optimal succinctness and computation time:
- Its secret key sk_f is of *constant size* (optimal), independent of the function description length |f|, i.e., |sk_f| = poly(lambda).
- Its ciphertext ct_x(y) is *rate-2* in the private input length |y| (nearly optimal) and *independent* of the public input length |x| (optimal), i.e., |ct_x(y)| = 2|y| + poly(lambda).
- Decryption time is *linear* in the *instance* running time T of the RAM computation, plus the function and public/private input lengths, i.e., T_Dec = (T + |f| + |x| + |y|)poly(lambda).
As a corollary, we obtain the first ABE with both keys and ciphertexts being constant-size, while enjoying the best-possible decryption time matching the lower bound by Luo [ePrint '22]. We also separately achieve several other optimal ABE schemes subject to the known lower bound.
We study the barriers to further efficiency improvements. We prove the first unconditional space-time trade-offs for (PH-)FE:
- *No* secure (PH-)FE can have |sk_f| and T_Dec *both* sublinear in |f|.
- *No* secure PHFE can have |ct_x(y)| and T_Dec *both* sublinear in |x|.
Our lower bounds apply even to the weakest secret-key 1-key 1-ciphertext selective schemes. Furthermore, we show a conditional barrier towards the optimal decryption time T_Dec = T poly(lambda) while keeping linear size dependency --- any such (PH-)FE scheme implies doubly efficient private information retrieval (DE-PIR) with linear-size preprocessed database, for which so far there is no candidate.
2023
CRYPTO
Computational Wiretap Coding from Indistinguishability Obfuscation
Abstract
A wiretap coding scheme for a pair of noisy channels $(\mathsf{ChB},\mathsf{ChE})$ enables Alice to reliably communicate a message to Bob by sending its encoding over $\mathsf{ChB}$, while hiding the message from an adversary Eve who obtains the same encoding over $\mathsf{ChE}$.
A necessary condition for the feasibility of wiretap coding is that $\mathsf{ChB}$ is not a {\em degradation} of $\mathsf{ChE}$, namely Eve cannot simulate Bob's view. While insufficient in the information-theoretic setting, a recent work of Ishai, Korb, Lou, and Sahai (Crypto 2022) showed that the non-degradation condition {\em is} sufficient in the computational setting, assuming idealized flavors of obfuscation. The question of basing a similar feasibility result on standard cryptographic assumptions was left open, even in simple special cases.
In this work, we settle the question for all discrete memoryless channels where the (common) input alphabet of $\mathsf{ChB}$ and $\mathsf{ChE}$ is {\em binary}, with arbitrary finite output alphabets, under the standard assumptions that indistinguishability obfuscation and injective PRGs exist. In particular, this establishes the feasibility of computational wiretap coding when $\mathsf{ChB}$ is a binary symmetric channel with crossover probability $p$ and $\mathsf{ChE}$ is a binary erasure channel with erasure probability $e$, where $e>2p$.
On the information-theoretic side, our result builds on a new polytope characterization of channel degradation for pairs of binary-input channels, which may be of independent interest.
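The concrete channel pair from the feasibility result above is easy to simulate. The sketch below models Bob's BSC($p$) and Eve's BEC($e$) and picks parameters in the regime $e > 2p$ covered by the result; the specific values are illustrative.

```python
# Minimal illustration of the channel pair above: Bob gets the bit through a binary
# symmetric channel BSC(p), Eve through a binary erasure channel BEC(e). The abstract
# establishes computational wiretap coding when e > 2p. Parameters illustrative only.
import random

def bsc(bit, p, rng):
    """Binary symmetric channel: flip the bit with probability p."""
    return bit ^ (rng.random() < p)

def bec(bit, e, rng):
    """Binary erasure channel: erase the bit (None) with probability e."""
    return None if rng.random() < e else bit

rng = random.Random(0)
p, e = 0.1, 0.25            # e > 2p: the non-degraded regime from the abstract
x = 1
print("Bob receives:", bsc(x, p, rng))
print("Eve receives:", bec(x, e, rng))
```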
2023
CRYPTO
Multi-Party Homomorphic Secret Sharing and Sublinear MPC from Sparse LPN
Abstract
Over the past few years, we have seen the powerful emergence of homomorphic secret sharing (HSS) as a compelling alternative to fully homomorphic encryption (FHE), due to its efficiency benefits and its feasibility from an array of standard assumptions. However, all previously known HSS schemes, with the exception of schemes built from FHE or indistinguishability obfuscation (iO), can only support two parties.
In this work, we give the first construction of a \emph{multi-party} HSS scheme for a non-trivial function class, from an assumption not known to imply FHE. In particular, we construct an HSS scheme for an \emph{arbitrary} number of parties with an \emph{arbitrary} corruption threshold, supporting evaluations of $\log / \log \log$-degree polynomials, containing a polynomial number of monomials, over arbitrary finite fields. As a consequence, we obtain an MPC protocol for any number of parties, with (slightly) \emph{sub-linear} communication per party of roughly $O(S / \log \log S)$ bits when evaluating a layered Boolean circuit of size $S$.
Our HSS scheme relies on the \emph{sparse} Learning Parity with Noise (LPN) assumption, a standard variant of LPN with a sparse public matrix that has been studied and used in prior works. Thanks to this assumption, our construction enjoys several unique benefits. In particular, it can be built on top of \emph{any} linear secret sharing scheme, producing noisy output shares that can be error-corrected by the decoder. This yields HSS for low-degree polynomials with optimal download rate. Unlike prior works, our scheme also has a low computation overhead in that the per-party computation of a constant degree polynomial takes $O(M)$ work, where $M$ is the number of monomials.
2023
CRYPTO
The Pseudorandom Oracle Model and Ideal Obfuscation
Abstract
We introduce a new idealized model of hash functions, which we refer to as the *pseudorandom oracle* (PrO) model. Intuitively, it allows us to model cryptosystems that use the code of an ideal hash function in a non-black-box way. Formally, we model hash functions via a combination of a pseudorandom function (PRF) family and an ideal oracle. A user can initialize the hash function by choosing a PRF key $k$ and mapping it to a public handle $h$ using the oracle. Given the handle $h$ and some input $x$, the oracle can also be called to evaluate the PRF at $x$ with the corresponding key $k$. A user who chooses the PRF key $k$ therefore has a complete description of the hash function and can use its code in non-black-box constructions, while an adversary, who just gets the handle $h$, only has black-box access to the hash function via the oracle.
As our main result, we show how to construct ideal obfuscation in the PrO model, starting from functional encryption (FE), which in turn can be based on well-studied polynomial hardness assumptions. In contrast, we know that ideal obfuscation cannot be instantiated in the basic random oracle model under any assumptions. We believe our result provides heuristic justification for the following: (1) most natural security goals implied by ideal obfuscation can be achieved in the real world; (2) obfuscation can be constructed from FE at polynomial security loss.
We also discuss how to interpret our result in the PrO model as a construction of ideal obfuscation using simple hardware tokens or as a way to bootstrap ideal obfuscation for PRFs to that for all functions.
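A minimal mock of the PrO interface described above follows: the oracle maps a PRF key to a public handle and later evaluates the PRF given only the handle and an input. The use of HMAC-SHA256 as the "PRF" and all names here are illustrative stand-ins, not the paper's construction.

```python
# Toy mock of the pseudorandom oracle (PrO) interface: init(k) -> handle h,
# eval(h, x) -> PRF_k(x). The PRF is instantiated with HMAC-SHA256 for illustration.
import hmac, hashlib, os

class PseudorandomOracle:
    def __init__(self):
        self._keys = {}                      # handle -> PRF key (hidden oracle state)

    def init(self, k: bytes) -> bytes:
        h = os.urandom(16)                   # fresh random public handle
        self._keys[h] = k
        return h

    def eval(self, h: bytes, x: bytes) -> bytes:
        k = self._keys[h]                    # the adversary only interacts via this call
        return hmac.new(k, x, hashlib.sha256).digest()

oracle = PseudorandomOracle()
k = os.urandom(32)                           # the user knows k (non-black-box access)
h = oracle.init(k)                           # others only see the handle h
assert oracle.eval(h, b"x") == hmac.new(k, b"x", hashlib.sha256).digest()
```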
2022
EUROCRYPT
Indistinguishability Obfuscation from LPN over F_p, DLIN, and PRGs in NC^0
Abstract
In this work, we study what minimal sets of assumptions suffice for constructing indistinguishability obfuscation ($\iO$). We prove:
{\bf Theorem} (Informal): {\em Assume sub-exponential security of the following assumptions:
- the Learning Parity with Noise ($\mathsf{LPN}$) assumption over general prime fields $\mathbb{F}_p$ with
polynomially many $\mathsf{LPN}$ samples and error rate $1/k^\delta$, where $k$ is the dimension of the $\mathsf{LPN}$ secret, and $\delta>0$ is any constant;
- the existence of a Boolean Pseudo-Random Generator ($\mathsf{PRG}$) in $\mathsf{NC}^0$ with
stretch $n^{1+\tau}$, where $n$ is the length of the $\mathsf{PRG}$ seed, and $\tau>0$ is any constant;
- the Decision Linear ($\mathsf{DLIN}$) assumption on symmetric bilinear groups of prime order.
Then, (subexponentially secure) indistinguishability obfuscation for all polynomial-size circuits exists. Further, assuming only polynomial security of the aforementioned assumptions, there exists collusion resistant public-key functional encryption for all polynomial-size circuits.
This removes the reliance on the Learning With Errors (LWE) assumption from the recent work of [Jain, Lin, Sahai STOC'21]. As a consequence, we obtain the first fully homomorphic encryption scheme that does not rely on any lattice-based hardness assumption.
Our techniques feature a new notion of randomized encoding called Preprocessing Randomized Encoding (PRE), that essentially can be computed in the exponent of pairing groups. When combined with other new techniques, PRE gives a much more streamlined construction of $\iO$ while still maintaining reliance only on well-studied assumptions.
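The second assumption above, a Boolean PRG in $\mathsf{NC}^0$ with polynomial stretch, is the Goldreich-style local PRG; the sketch below samples one. The locality, the specific 5-local predicate, and all parameter values are illustrative assumptions, not the paper's instantiation.

```python
# Minimal sketch of a Goldreich-style local (NC^0) PRG: each output bit applies a
# fixed constant-arity predicate to a few seed bits chosen by a public hypergraph.
# Predicate and parameters are illustrative only.
import random

rng = random.Random(3)
n = 64                       # seed length
m = int(n ** 1.25)           # polynomial stretch n^{1+tau}
locality = 5

# Public hypergraph: for each output bit, a random tuple of seed positions.
graph = [rng.sample(range(n), locality) for _ in range(m)]

def predicate(bits):
    """Illustrative 5-local predicate: x1 + x2 + x3 + x4*x5 over F_2."""
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

def prg(seed):
    return [predicate([seed[i] for i in tup]) for tup in graph]

seed = [rng.randint(0, 1) for _ in range(n)]
output = prg(seed)           # m pseudorandom bits from an n-bit seed
```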
2021
EUROCRYPT
Multi-Party Reusable Non-Interactive Secure Computation from LWE
Abstract
Motivated by the goal of designing versatile and flexible secure computation protocols that at the same time require as little interaction as possible, we present new multiparty reusable Non-Interactive Secure Computation (mrNISC) protocols. This notion, recently introduced by Benhamouda and Lin (TCC 2020), is essentially a two-round Multi-Party Computation (MPC) protocol where the first round of messages serves as a reusable commitment to the private inputs of participating parties. Using these commitments, any subset of parties can later compute any function of their choice on their respective inputs by just sending a single message to a stateless evaluator, conveying the result of the computation but nothing else. Importantly, the input commitments can be computed without knowing anything about other participating parties (neither their identities nor their number) and they are reusable across any number of desired computations.
We give a construction of mrNISC that achieves standard simulation security, as classical multi-round MPC protocols achieve. Our construction relies on the Learning With Errors (LWE) assumption with polynomial modulus, and on the existence of a pseudorandom function (PRF) in $\mathsf{NC}^1$. We achieve semi-malicious security in the plain model and malicious security by further relying on trusted setup (which is unavoidable for mrNISC). In comparison, the only previously known constructions of mrNISC were either using bilinear maps or using strong primitives such as program obfuscation.
We use our mrNISC to obtain new Multi-Key FHE (MKFHE) schemes with threshold decryption:
- In the CRS model, we obtain threshold MKFHE for $\mathsf{NC}^1$ based on LWE with only {\em polynomial} modulus and PRFs in $\mathsf{NC}^1$, whereas all previous constructions rely on LWE with super-polynomial modulus-to-noise ratio.
- In the plain model, we obtain threshold levelled MKFHE for $\mathsf{P}$ based on LWE with {\em polynomial} modulus, PRF in $\mathsf{NC}^1$, and NTRU, and another scheme for a constant number of parties from LWE with sub-exponential modulus-to-noise ratio. The only known prior construction of threshold MKFHE (Ananth et al., TCC 2020) in the plain model restricts the set of parties who can compute together at the onset.
2021
EUROCRYPT
Indistinguishability Obfuscation from Simple-to-State Hard Problems: New Assumptions, New Techniques, and Simplification
Abstract
In this work, we study the question of what set of simple-to-state assumptions suffice for constructing functional encryption and indistinguishability obfuscation ($i\mathcal{O}$), supporting all functions describable by polynomial-size circuits. Our work improves over the state-of-the-art work of Jain, Lin, Matt, and Sahai (Eurocrypt 2019) in multiple dimensions.
New Assumption: Prior to our work, all constructions of $i\mathcal{O}$ from simple assumptions required novel pseudorandomness generators involving LWE samples and constant-degree polynomials over the integers, evaluated on the error of the LWE samples. In contrast, Boolean pseudorandom generators (PRGs) computable by constant-degree polynomials have been extensively studied since the work of Goldreich (2000). We show how to replace the novel pseudorandom objects over the integers used in previous works with appropriate Boolean pseudorandom generators with sufficient stretch, when combined with LWE with binary error over suitable parameters. Both binary-error LWE and constant-degree Goldreich PRGs have been the subject of extensive cryptanalysis since well before our work, and thus we back the plausibility of our assumption with security against algorithms studied in the context of cryptanalysis of these objects.
New Techniques: We introduce a number of new techniques:
- We show how to build partially-hiding public-key functional encryption, supporting degree-2 functions in the secret part of the message, and arithmetic $\mathsf{NC}^1$ functions over the public part of the message, assuming only standard assumptions over asymmetric pairing groups.
- We construct single-ciphertext secret-key functional encryption for all circuits with {\em linear} key generation, assuming only the LWE assumption.
Simplification: Unlike prior works, our new techniques furthermore let us construct public-key functional encryption for polynomial-sized circuits directly (without invoking any bootstrapping theorem, nor transformation from secret-key to public key FE), and based only on the polynomial hardness of underlying assumptions. The functional encryption scheme satisfies a strong notion of efficiency where the size of the ciphertext grows only sublinearly in the output size of the circuit and not its size. Finally, assuming that the underlying assumptions are subexponentially hard, we can bootstrap this construction to achieve $i\mathcal{O}$.
2021
CRYPTO
Counterexamples to New Circular Security Assumptions Underlying iO
Abstract
We study several strengthenings of classical circular security assumptions which were recently introduced in four new lattice-based constructions of indistinguishability obfuscation: Brakerski-D\"ottling-Garg-Malavolta (Eurocrypt 2020), Gay-Pass (STOC 2021), Brakerski-D\"ottling-Garg-Malavolta (ePrint 2020) and Wee-Wichs (ePrint 2020).
We provide explicit counterexamples to the {\em $2$-circular shielded randomness leakage} assumption w.r.t.\ the Gentry-Sahai-Waters fully homomorphic encryption scheme proposed by Gay-Pass, and the {\em homomorphic pseudorandom LWE samples} conjecture proposed by Wee-Wichs.
Our work suggests a separation between classical circular security of the kind underlying un-levelled fully-homomorphic encryption from the strengthened versions underlying recent iO constructions, showing that they are not (yet) on the same footing.
Our counterexamples exploit the flexibility to choose specific implementations of circuits, which is explicitly allowed in the Gay-Pass assumption and unspecified in the Wee-Wichs assumption. Their indistinguishability obfuscation schemes are still unbroken. Our work shows that the assumptions, at least, need refinement. In particular, generic leakage-resilient circular security assumptions are delicate, and their security is sensitive to the specific structure of the leakages involved.
2020
EUROCRYPT
Statistical ZAP Arguments
Abstract
Dwork and Naor (FOCS'00) first introduced and constructed two message public coin witness indistinguishable proofs (ZAPs) for NP based on trapdoor permutations. Since then, ZAPs have also been obtained based on the decisional linear assumption on bilinear maps, and indistinguishability obfuscation, and have proven extremely useful in the design of several cryptographic primitives.
However, all known constructions of two-message public coin (or even publicly verifiable) proof systems only guarantee witness indistinguishability against computationally bounded verifiers.
In this paper, we construct the first public coin two message witness indistinguishable (WI) arguments for NP with {\em statistical} privacy, assuming quasi-polynomial hardness of the learning with errors (LWE) assumption. We also show that the same protocol has a super-polynomial simulator (SPS), which yields the first public-coin SPS statistical zero knowledge argument.
Prior to this, there were no known constructions of two-message publicly verifiable WI protocols under lattice assumptions, even satisfying the weaker notion of computational witness indistinguishability.
2020
EUROCRYPT
Combiners for Functional Encryption, Unconditionally
Abstract
Functional encryption (FE) combiners allow one to combine many candidates for a functional encryption scheme, possibly based on different computational assumptions, into another functional encryption candidate with the guarantee that the resulting candidate is secure as long as at least one of the original candidates is secure. The fundamental question in this area is whether FE combiners exist.
There has been a series of works on constructing FE combiners from various assumptions: Ananth et al. (CRYPTO '16), Ananth-Jain-Sahai (EUROCRYPT '17), and Ananth et al. (TCC '19).
We give the first unconditional construction of combiners for functional encryption, resolving this question completely. Our construction immediately implies an unconditional universal functional encryption scheme: an explicit FE scheme that is secure as long as some secure FE scheme exists. Previously, such results either relied on algebraic assumptions or required subexponential security assumptions.
2020
CRYPTO
Amplifying the Security of Functional Encryption, Unconditionally
Abstract
Security amplification is a fundamental problem in cryptography. In this work, we study security amplification for functional encryption. We show two main results:
- For any constant epsilon in (0,1), we can amplify an epsilon-secure FE scheme for P/poly that is secure against all polynomial-sized adversaries to a fully secure FE scheme for P/poly, unconditionally.
- For any constant epsilon in (0,1), we can amplify an epsilon-secure FE scheme for P/poly that is secure against subexponential-sized adversaries to a subexponentially secure FE scheme for P/poly, unconditionally.
Furthermore, both of our amplification results preserve compactness of the underlying FE scheme. Previously, amplification results for FE were only known assuming subexponentially secure LWE.
Along the way, we introduce a new form of homomorphic secret sharing called set homomorphic secret sharing that may be of independent interest. Additionally, we introduce a new technique, which allows one to argue security amplification of nested primitives, and prove a general theorem that can be used to analyze the security amplification of parallel repetitions.
2020
ASIACRYPT
Secure MPC: Laziness Leads to GOD
Abstract
Motivated by what we call "honest but lazy" parties in the context of secure multi-party computation, we revisit the notion of multi-key FHE schemes (MFHE). In MFHE, any message encrypted using a public key pk_i can be "expanded" so that the resulting ciphertext is encrypted with respect to a set of public keys (pk_1,..,pk_n). Such expanded ciphertexts can be homomorphically evaluated with respect to any circuit to generate a ciphertext ct. Then, this ciphertext ct can be partially decrypted using a secret key sk_i (corresponding to the public key pk_i) to produce a partial decryption p_i. Finally, these partial decryptions {p_{i}}_{i in [n]} can be combined to recover the output. However, this definition of MFHE works only for n-out-of-n access structures and, thus, each node in the system is a point of failure. In the context of "honest but lazy" parties, it is necessary to be able to decrypt even when only given a subset of partial decryptions (say t out of n). In order to solve this problem, we introduce a new notion of multi-key FHE designed to handle arbitrary access patterns that can reconstruct the output. We call it a threshold multi-key FHE scheme (TMFHE).
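The key point above is combining any t of n partial decryptions. As a minimal, non-FHE illustration of such a threshold access structure, here is plain Shamir secret sharing over a prime field; the choice of Shamir and all parameter values are illustrative (the paper handles arbitrary monotone boolean formulas, not just thresholds).

```python
# Minimal t-out-of-n reconstruction via Shamir secret sharing over a prime field,
# purely to illustrate the threshold combination step; no FHE involved.
import random

P = 2**61 - 1                                # a prime modulus
def share(secret, t, n, rng):
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(points):                     # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

rng = random.Random(4)
shares = share(42, t=3, n=5, rng=rng)
assert reconstruct(shares[:3]) == 42         # any t shares suffice; fewer reveal nothing
```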
Our main contributions are the following:
* We formally define and construct TMFHE for any access structure given by a monotone boolean formula, assuming LWE.
* We construct the first simulation-extractable multi-string NIZK from polynomially hard LWE.
* We use TMFHE and our multi-string NIZK to obtain the first round-optimal (three-round) MPC protocol in the plain model with guaranteed output delivery secure against malicious adversaries or, more generally, mixed adversaries (which supports "honest but lazy" parties), assuming LWE.
* Our MPC protocols simultaneously achieve security against the maximum number of corruptions under which guaranteed output delivery is achievable, depth-proportional communication complexity, and reusability.
2019
EUROCRYPT
Sum-of-Squares Meets Program Obfuscation, Revisited
Abstract
We develop attacks on the security of variants of pseudo-random generators computed by quadratic polynomials. In particular, we give a general condition for breaking the one-way property of mappings where every output is a quadratic polynomial (over the reals) of the input. As a corollary, we break the degree-2 candidates for security assumptions recently proposed for constructing indistinguishability obfuscation by Ananth, Jain and Sahai (ePrint 2018) and Agrawal (ePrint 2018). We present conjectures that would imply our attacks extend to a wider variety of instances, and in particular offer experimental evidence that they break the assumption of Lin-Matt (ePrint 2018). Our algorithms use semidefinite programming, and in particular, results on low-rank recovery (Recht, Fazel, Parrilo 2007) and matrix completion (Gross 2009).
2019
EUROCRYPT
How to Leverage Hardness of Constant-Degree Expanding Polynomials over $\mathbb{R}$ to Build $i\mathcal{O}$
Abstract
In this work, we introduce and construct D-restricted Functional Encryption (FE) for any constant $D \ge 3$, based only on the SXDH assumption over bilinear groups. This generalizes the notion of 3-restricted FE recently introduced and constructed by Ananth et al. (ePrint 2018) in the generic bilinear group model.
A $D=(d+2)$-restricted FE scheme is a secret-key FE scheme that allows an encryptor to efficiently encrypt a message of the form $M=(\mathbf{x},\mathbf{y},\mathbf{z})$. Here, $\mathbf{x}\in \mathbb{F}_{\mathbf{p}}^{d\times n}$ and $\mathbf{y},\mathbf{z}\in \mathbb{F}_{\mathbf{p}}^{n}$. Function keys can be issued for a function $f=\sum_{\mathbf{I}=(i_1,\ldots,i_d,j,k)} c_{\mathbf{I}}\cdot \mathbf{x}[1,i_1]\cdots \mathbf{x}[d,i_d]\cdot \mathbf{y}[j]\cdot \mathbf{z}[k]$, where the coefficients $c_{\mathbf{I}}\in \mathbb{F}_{\mathbf{p}}$. Knowing the function key and the ciphertext, one can learn $f(\mathbf{x},\mathbf{y},\mathbf{z})$, if this value is bounded in absolute value by some polynomial in the security parameter and $n$. The security requirement is that the ciphertext hides $\mathbf{y}$ and $\mathbf{z}$, although it is not required to hide $\mathbf{x}$. Thus $\mathbf{x}$ can be seen as a public attribute.
D-restricted FE allows for useful evaluation of constant-degree polynomials, while only requiring the SXDH assumption over bilinear groups. As such, it is a powerful tool for leveraging hardness that exists in constant-degree expanding families of polynomials over $\mathbb{R}$. In particular, we build upon the work of Ananth et al. to show how to build indistinguishability obfuscation ($i\mathcal{O}$) assuming only SXDH over bilinear groups, LWE, and assumptions relating to weak pseudorandom properties of constant-degree expanding polynomials over $\mathbb{R}$.
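The function class above is just a sum of monomials of a fixed shape. The sketch below evaluates one such function for $d = 2$ (so $D = 4$) over a small prime field; the dimensions, coefficients, and field size are illustrative choices only.

```python
# Worked evaluation of the D-restricted function class for d = 2 (D = 4):
# f(x, y, z) = sum over I=(i1,i2,j,k) of c_I * x[1,i1]*x[2,i2]*y[j]*z[k] over F_p.
# Decryption in the scheme reveals f(x, y, z) only when it is small in absolute value.
import numpy as np

p, d, n = 97, 2, 3
rng = np.random.default_rng(5)
x = rng.integers(0, p, size=(d, n))          # public attribute x in F_p^{d x n}
y = rng.integers(0, p, size=n)               # private input
z = rng.integers(0, p, size=n)               # private input
c = rng.integers(0, p, size=(n, n, n, n))    # coefficients c_I indexed by (i1, i2, j, k)

f_xyz = 0
for i1 in range(n):
    for i2 in range(n):
        for j in range(n):
            for k in range(n):
                f_xyz = (f_xyz + c[i1, i2, j, k] * x[0, i1] * x[1, i2] * y[j] * z[k]) % p
print(f_xyz)
```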
2019
CRYPTO
Simultaneous Amplification: The Case of Non-interactive Zero-Knowledge
Abstract
In this work, we explore the question of simultaneous privacy and soundness amplification for non-interactive zero-knowledge argument systems (NIZK). We show that any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate satisfying $\delta_s+\delta_z=1-\epsilon$, for any constant $\epsilon>0$, can be turned into a computationally sound and zero-knowledge candidate with the only extra assumption of a subexponentially secure public-key encryption.
We develop novel techniques to leverage the use of the leakage simulation lemma (Jetchev-Pietrzak TCC 2014) to argue amplification. A crucial component of our result is a new notion for secret sharing $\mathsf{NP}$ instances. We believe that this may be of independent interest.
To achieve this result we analyze the following two transformations:
- Parallel Repetition: We show that using parallel repetition, any $\delta_s$-sound and $\delta_z$-zero-knowledge $\mathsf{NIZK}$ candidate can be turned into a (roughly) $\delta^n_s$-sound and $1-(1-\delta_z)^n$-zero-knowledge candidate. Here $n$ is the repetition parameter.
- MPC-based Repetition: We propose a new transformation that amplifies zero-knowledge in the same way that parallel repetition amplifies soundness. We show that using this transformation, any $\delta_s$-sound and $\delta_z$-zero-knowledge $\mathsf{NIZK}$ candidate can be turned into a (roughly) $1-(1-\delta_s)^n$-sound and $2\cdot\delta^n_z$-zero-knowledge candidate.
We then show that using these transformations in a zig-zag fashion we can obtain our result. Finally, we also present a simple transformation which directly turns any $\mathsf{NIZK}$ candidate satisfying $\delta_s,\delta_z<1/3-1/\mathsf{poly}(\lambda)$ into a secure one.
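The two maps above are easy to evaluate numerically. The sketch below alternates them in a zig-zag, ignoring the "roughly" slack terms and the paper's precise scheduling; the starting errors and repetition parameter are illustrative only, chosen so the naive iteration visibly shrinks both errors.

```python
# Numeric illustration of the two amplification maps (slack terms ignored):
# parallel repetition: (ds, dz) -> (ds**n, 1-(1-dz)**n)
# MPC-based repetition: (ds, dz) -> (1-(1-ds)**n, 2*dz**n)
def parallel_repetition(ds, dz, n):
    return ds ** n, 1 - (1 - dz) ** n

def mpc_repetition(ds, dz, n):
    return 1 - (1 - ds) ** n, 2 * dz ** n

ds, dz = 0.1, 0.1                      # illustrative starting soundness/ZK errors
for _ in range(3):                     # zig-zag: alternate the two transformations
    ds, dz = parallel_repetition(ds, dz, n=3)
    ds, dz = mpc_repetition(ds, dz, n=3)
    print(f"soundness error ~ {ds:.3g}, zero-knowledge error ~ {dz:.3g}")
```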
2019
CRYPTO
Indistinguishability Obfuscation Without Multilinear Maps: New Paradigms via Low Degree Weak Pseudorandomness and Security Amplification
Abstract
The existence of secure indistinguishability obfuscators ($i\mathcal{O}$) has far-reaching implications, significantly expanding the scope of problems amenable to cryptographic study. All known approaches to constructing $i\mathcal{O}$ rely on $d$-linear maps. While secure bilinear maps are well established in cryptographic literature, the security of candidates for $d>2$ is poorly understood.
We propose a new approach to constructing $i\mathcal{O}$ for general circuits. Unlike all previously known realizations of $i\mathcal{O}$, we avoid the use of $d$-linear maps of degree $d \ge 3$.
At the heart of our approach is the assumption that a new weak pseudorandom object exists. We consider two related variants of these objects, which we call perturbation resilient generator ($\Delta$RG) and pseudo flawed-smudging generator ($\mathrm{PFG}$), respectively. At a high level, both objects are polynomially expanding functions whose outputs partially hide (or smudge) small noise vectors when added to them. We further require that they are computable by a family of degree-3 polynomials over $\mathbb{Z}$. We show how they can be used to construct functional encryption schemes with weak security guarantees. Finally, we use novel amplification techniques to obtain full security.
As a result, we obtain $i\mathcal{O}$ for general circuits assuming:
- Subexponentially secure LWE
- Bilinear Maps
- $\mathrm{poly}(\lambda)$-secure 3-block-local PRGs
- $\Delta$RGs or $\mathrm{PFG}$s
2019
TCC
From FE Combiners to Secure MPC and Back
Abstract
Cryptographic combiners allow one to combine many candidates for a cryptographic primitive, possibly based on different computational assumptions, into another candidate with the guarantee that the resulting candidate is secure as long as at least one of the original candidates is secure. While the original motivation of cryptographic combiners was to reduce trust on existing candidates, in this work, we study a rather surprising implication of combiners to constructing secure multiparty computation protocols. Specifically, we initiate the study of functional encryption combiners and show its connection to secure multiparty computation.
Functional encryption (FE) has incredible applications towards computing on encrypted data. However, constructing the most general form of this primitive has remained elusive. Although some candidate constructions exist, they rely on nonstandard assumptions, and thus, their security has been questioned. An FE combiner attempts to make use of these candidates while minimizing the trust placed on any individual FE candidate. Informally, an FE combiner takes in a set of FE candidates and outputs a secure FE scheme if at least one of the candidates is secure.
Another fundamental area in cryptography is secure multi-party computation (MPC), which has been extensively studied for several decades. In this work, we initiate a formal study of the relationship between functional encryption (FE) combiners and secure multi-party computation (MPC). In particular, we show implications in both directions between these primitives. As a consequence of these implications, we obtain the following main results:
- A two-round semi-honest MPC protocol in the plain model secure against up to $n-1$ corruptions with communication complexity proportional only to the depth of the circuit being computed, assuming learning with errors (LWE). Prior two-round protocols based on standard assumptions that achieved this communication complexity required trust assumptions, namely, a common reference string.
- A functional encryption combiner based on pseudorandom generators (PRGs) in $\mathsf{NC}^1$. This is a weak assumption, as such PRGs are implied by many concrete intractability problems commonly used in cryptography, such as ones related to factoring, discrete logarithm, and lattice problems [11]. Previous constructions of FE combiners, implicit in [7], were known only from LWE. Using this result, we build a universal construction of functional encryption: an explicit construction of functional encryption based only on the assumptions that functional encryption exists and PRGs in $\mathsf{NC}^1$.
2018
CRYPTO
Threshold Cryptosystems from Threshold Fully Homomorphic Encryption
Abstract
We develop a general approach to adding a threshold functionality to a large class of (non-threshold) cryptographic schemes. A threshold functionality enables a secret key to be split into a number of shares, so that only a threshold of parties can use the key, without reconstructing the key. We begin by constructing a threshold fully-homomorphic encryption scheme (ThFHE) from the learning with errors (LWE) problem. We next introduce a new concept, called a universal thresholdizer, from which many threshold systems are possible. We show how to construct a universal thresholdizer from our ThFHE. A universal thresholdizer can be used to add threshold functionality to many systems, such as CCA-secure public-key encryption (PKE), signature schemes, pseudorandom functions, and other primitives. In particular, by applying this paradigm to a (non-threshold) lattice signature system, we obtain the first single-round threshold signature scheme from LWE.
2017
EUROCRYPT
2016
CRYPTO
Program Committees
- TCC 2024
- TCC 2023
Coauthors
- Prabhanjan Ananth (4)
- Saikrishna Badrinarayanan (4)
- Boaz Barak (1)
- Fabrice Benhamouda (1)
- Dan Boneh (1)
- Quang Dao (3)
- Rex Fernando (2)
- Romain Gay (1)
- Rosario Gennaro (1)
- Steven Goldfeder (1)
- Vipul Goyal (3)
- Samuel B. Hopkins (2)
- Yuval Ishai (2)
- Paul Lou (2)
- Aayush Jain (27)
- Zhengzhong Jin (1)
- Dakshita Khurana (1)
- Sam Kim (1)
- Ilan Komargodski (2)
- Alexis Korb (1)
- Pravesh Kothari (1)
- Huijia Lin (11)
- Ji Luo (2)
- Nathan Manohar (4)
- Christian Matt (2)
- Moni Naor (1)
- Adam O'Neill (1)
- Peter M. R. Rasmussen (1)
- Sagnik Saha (1)
- Amit Sahai (17)
- Daniel Wichs (1)
- Eylon Yogev (1)
- Mark Zhandry (1)