International Association for Cryptologic Research

CryptoDB

Ilan Komargodski

ORCID: 0000-0002-1647-2112

Publications

Year
Venue
Title
2024
TCC
On the (Im)possibility of Game-Theoretically Fair Leader Election Protocols
Ohad Klein Ilan Komargodski Chenzhi Zhu
We consider the problem of electing a leader among $n$ parties with the guarantee that each (honest) party has a reasonable probability of being elected, even in the presence of a coalition that controls a subset of parties and tries to bias the output. This notion is called ``game-theoretic fairness'' because such protocols ensure that following the honest behavior is an equilibrium and also the best response for every party and coalition. In the two-party case, Blum's commit-and-reveal protocol (where if one party aborts, then the other is declared the leader) satisfies this notion, and it is also known that one-way functions are necessary. Recent works study this problem in the multi-party setting. They show that composing Blum's 2-party protocol for $\log n$ rounds in a tournament-tree-style manner results in \emph{perfect game-theoretic fairness}: each honest party has probability $\ge 1/n$ of being elected as leader, no matter how large the coalition is. Logarithmic round complexity is also shown to be necessary if we require perfect fairness against a coalition of size $n-1$. Relaxing the above two requirements, i.e., settling for approximate game-theoretic fairness and guaranteeing fairness against only constant-fraction-size coalitions, it is known that there are $O(\log ^* n)$-round protocols. This leaves many open problems, in particular, whether one can go below logarithmic round complexity by relaxing only one of the strong requirements from above. We manage to resolve this problem for commit-and-reveal style protocols, showing that \begin{itemize} \item $\Omega(\log n/\log\log n)$ rounds are necessary if we settle for approximate fairness against very large (more than constant fraction) coalitions; \item $\Omega(\log n)$ rounds are necessary if we settle for perfect fairness against $n^\epsilon$-size coalitions (for any constant $\epsilon>0$). \end{itemize} These results show that both relaxations made in prior works are necessary to go below logarithmic round complexity. Lastly, we provide several additional upper and lower bounds for the case of single-round commit-and-reveal style protocols.
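For intuition, here is a minimal Python sketch (illustrative only, not from the paper) of the tournament-tree composition of Blum's commit-and-reveal protocol described above; the hash-based commitment and all function names are assumptions of the sketch.

```python
# Tournament-tree leader election from Blum's 2-party coin flip: a hedged,
# minimal sketch. The SHA-256 commitment is an illustrative stand-in for a
# cryptographic commitment; aborts (which make the other party the winner)
# are not simulated since both players here are honest.
import hashlib, os, random

def commit(bit: int):
    """Commit to a bit with 16 bytes of randomness."""
    r = os.urandom(16)
    return hashlib.sha256(bytes([bit]) + r).digest(), r

def blum_match(p0, p1):
    """One Blum coin flip deciding a match between parties p0 and p1."""
    b0 = random.randrange(2)
    c, r = commit(b0)               # round 1: p0 broadcasts the commitment c
    b1 = random.randrange(2)        # round 2: p1 broadcasts b1 in the clear
    assert hashlib.sha256(bytes([b0]) + r).digest() == c  # round 3: p0 opens
    return p0 if (b0 ^ b1) == 0 else p1

def tournament_leader_election(parties):
    """log n rounds of pairwise matches; each honest party survives a match
    with probability >= 1/2, hence is elected with probability >= 1/n."""
    alive = list(parties)
    while len(alive) > 1:
        nxt = [blum_match(alive[i], alive[i + 1])
               for i in range(0, len(alive) - 1, 2)]
        if len(alive) % 2 == 1:     # odd party out gets a bye this round
            nxt.append(alive[-1])
        alive = nxt
    return alive[0]

print(tournament_leader_election(list(range(8))))  # a uniformly random leader
```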
2023
EUROCRYPT
Fully Adaptive Decentralized Multi-Authority ABE
Pratish Datta Ilan Komargodski Brent Waters
Decentralized multi-authority attribute-based encryption (MA-ABE) is a distributed generalization of standard (ciphertext-policy) attribute-based encryption where there is no trusted central authority: any party can become an authority and issue private keys, and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. We present the first multi-authority attribute-based encryption schemes that are provably fully-adaptively secure. Namely, our construction is secure against an attacker that may corrupt some of the authorities as well as perform key queries adaptively throughout the lifetime of the system. Our main construction relies on a prime order bilinear group where the k-linear assumption holds as well as on a random oracle. Along the way, we present a conceptually simpler construction relying on a composite order bilinear group with standard subgroup decision assumptions as well as on a random oracle. Prior to this work, there was no construction that could resist adaptive corruptions of authorities, no matter the assumptions used. In fact, we point out that even standard complexity leveraging style arguments do not work in the multi-authority setting.
2023
EUROCRYPT
Maliciously-Secure MrNISC in the Plain Model
Rex Fernando Aayush Jain Ilan Komargodski
We study strong versions of round-optimal MPC. A recent work of Benhamouda and Lin (TCC '20) identified a version of secure multiparty computation (MPC), termed Multiparty reusable Non-Interactive Secure Computation (MrNISC), that combines at the same time several fundamental aspects of secure computation with standard simulation security into one primitive: round-optimality, succinctness, concurrency, and adaptivity. In more detail, MrNISC is essentially a two-round MPC protocol where the first round of messages serves as a reusable commitment to the private inputs of participating parties. Using these commitments, any subset of parties can later compute any function of their choice on their respective inputs by broadcasting one message each. Anyone who sees these parties' commitments and evaluation messages (even an outside observer) can learn the function output and nothing else. Importantly, the input commitments can be computed without knowing anything about other participating parties (neither their identities nor their number) and they are reusable across any number of computations. By now, there are several known MrNISC protocols from either (bilinear) group-based assumptions or from LWE. They all satisfy semi-malicious security (in the plain model) and require trusted setup assumptions in order to get malicious security. We are interested in maliciously secure MrNISC protocols in the plain model, without trusted setup. Since the standard notion of polynomial simulation is unachievable in less than four rounds, we focus on security with \emph{super-polynomial}-time simulation (SPS). Our main result is the first maliciously secure SPS MrNISC in the plain model. The result is obtained by generically compiling any semi-malicious MrNISC, and the security of our compiler relies on several well-studied assumptions: an indistinguishability obfuscator, DDH over $\mathbb{Z}^*_p$ and asymmetric pairing groups, and a time-lock puzzle (all of which need to be sub-exponentially hard). As a special case, we obtain the first 2-round maliciously secure SPS MPC based on well-founded assumptions. This MPC is also concurrently self-composable and its first message is short (i.e., its size is independent of the number of the participating parties) and reusable throughout any number of computations. Prior to our work, for two-round maliciously secure MPC, neither concurrent MPC nor reusable MPC nor MPC with first message independent of the number of parties was known from any set of assumptions. Of independent interest is one of our building blocks: the first construction of a one-round non-malleable commitment scheme from well-studied assumptions, avoiding keyless hash functions and non-standard hardness amplification assumptions.
2023
EUROCRYPT
Optimal Security for Keyed Hash Functions: Avoiding Time-Space Tradeoffs for Finding Collisions
Cody Freitag Ashrujit Ghoshal Ilan Komargodski
Cryptographic hash functions map data of arbitrary size to a fixed size digest, and are one of the most commonly used cryptographic objects. As it is infeasible to design an individual hash function for every input size, variable-input length hash functions are built by designing and bootstrapping a single fixed-input length function that looks sufficiently random. To prevent trivial preprocessing attacks, applications often require not just a single hash function but rather a family of keyed hash functions. The most well-known methods for designing variable-input length hash function families from a fixed idealized function are the Merkle-Damgård and Sponge designs. The former underlies the SHA-1 and SHA-2 constructions and the latter underlies SHA-3. Unfortunately, recent works (Coretti et al. EUROCRYPT 2018, Coretti et al. CRYPTO 2018) show non-trivial time-space tradeoff attacks for finding collisions for both. Thus, this forces a parameter blowup (i.e., efficiency loss) for reaching a certain desired level of security. We ask whether it is possible to build families of keyed hash functions which are \emph{provably} resistant to any non-trivial time-space tradeoff attacks for finding collisions, without incurring significant efficiency costs. We present several new constructions of keyed hash functions that are provably resistant to any non-trivial time-space tradeoff attacks for finding collisions. Our constructions provide various tradeoffs between their efficiency and the range of parameters where they achieve optimal security for collision resistance. Our main technical contribution is proving optimal security bounds for converting a hash function with a fixed-sized input to a keyed hash function with (potentially larger) fixed-size input. We then use this keyed function as the underlying primitive inside the standard Merkle-Damgård and Merkle tree constructions. We strongly believe that this paradigm of using a keyed inner hash function in these constructions is the right one, for which non-uniform security has not been analyzed prior to this work.
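To make the bootstrapping paradigm concrete, here is a minimal Python sketch (an illustration under assumptions, not the paper's construction) of a keyed Merkle-tree hash built from a single keyed fixed-input-length function; SHA-256 stands in for the idealized fixed-input-length primitive.

```python
# Keyed Merkle-tree hashing: a keyed fixed-input-length function h_k maps two
# 32-byte blocks to one; leaves are hashed pairwise up to a root. SHA-256 is
# an illustrative stand-in for the idealized fixed-input-length primitive.
import hashlib

def h_k(key: bytes, left: bytes, right: bytes) -> bytes:
    """Keyed compression: (key, two blocks) -> one block."""
    return hashlib.sha256(key + left + right).digest()

def merkle_hash(key: bytes, blocks: list) -> bytes:
    level = list(blocks) or [b"\x00" * 32]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(b"\x00" * 32)   # pad odd levels (illustrative choice)
        level = [h_k(key, level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

key = b"\x01" * 16  # the hash-family key, sampled once and published
print(merkle_hash(key, [b"a" * 32, b"b" * 32, b"c" * 32]).hex())
```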
2023
JOFC
Oblivious RAM with Worst-Case Logarithmic Overhead
We present the first Oblivious RAM (ORAM) construction that for $N$ memory blocks supports accesses with \emph{worst-case} $O(\log N)$ overhead for any block size $\Omega(\log N)$ while requiring a client memory of only a constant number of memory blocks. We rely on the existence of one-way functions and guarantee computational security. Our result closes a long line of research on fundamental feasibility results for ORAM constructions as logarithmic overhead is necessary. The previous best logarithmic overhead construction only guarantees it in an \emph{amortized} sense, i.e., logarithmic overhead is achieved only for long enough access sequences, where some of the individual accesses incur $\Theta(N)$ overhead. The previously best ORAM in terms of \emph{worst-case} overhead achieves $O(\log^2 N/\log\log N)$ overhead. Technically, we design a novel de-amortization framework for modern ORAM constructions that use the ``shuffled inputs'' assumption. Our framework significantly departs from all previous de-amortization frameworks, originating from Ostrovsky and Shoup (STOC '97), that seem to be fundamentally too weak to be applied on modern ORAM constructions.
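As a point of reference for what "overhead" measures, the following toy Python baseline (not the paper's construction) hides the access pattern trivially by touching every block on every access, paying $O(N)$ overhead where the result above pays a worst-case $O(\log N)$; re-encryption of blocks is omitted.

```python
# Trivial linear-scan ORAM: the physical access pattern (read+write every
# block, in order) is independent of the logical address, so it hides the
# access pattern perfectly -- at O(N) overhead per operation. Real ORAMs also
# re-encrypt blocks on each access; that is omitted here.
class LinearScanORAM:
    def __init__(self, n_blocks: int):
        self.mem = [0] * n_blocks

    def access(self, op: str, addr: int, data=None):
        out = None
        for i in range(len(self.mem)):   # always scan all N blocks
            block = self.mem[i]          # physical read of block i
            if i == addr:
                out = block
                if op == "write":
                    block = data
            self.mem[i] = block          # physical write-back of block i
        return out

oram = LinearScanORAM(8)
oram.access("write", 3, 42)
print(oram.access("read", 3))  # 42; both calls produced identical access patterns
```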
2023
JOFC
Decentralized Multi-authority ABE for $\textsf{NC}^1$ from BDH
Pratish Datta Ilan Komargodski Brent Waters
Decentralized multi-authority attribute-based encryption ($\textsf{MA}$-$\textsf{ABE}$) is a strengthening of standard ciphertext-policy attribute-based encryption so that there is no trusted central authority: Any party can become an authority and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. Essentially, any party can act as an authority for some attribute by creating a public key of its own and issuing private keys to different users that reflect their attributes. This paper presents the first $\textsf{MA}$-$\textsf{ABE}$ proven secure under the standard search variant of bilinear Diffie–Hellman (CBDH) and in the random oracle model. Our scheme supports all access policies captured by $\textsf{NC}^1$ circuits. All previous constructions were proven secure in the random oracle model and additionally were based on decision assumptions such as the DLIN assumption, non-standard $q$-type assumptions, or subspace decision assumptions over composite-order bilinear groups.
2023
CRYPTO
New Bounds on the Local Leakage Resilience of Shamir's Secret Sharing Scheme
Ohad Klein Ilan Komargodski
We study the local leakage resilience of Shamir's secret sharing scheme. In Shamir's scheme, a random polynomial $f$ of degree $t-1$ is sampled over a field of size $p>n$, conditioned on $f(0)=s$ for a secret $s$. Any $t$ points can be used to fully recover $f$ and thereby $f(0)$. But, any $t-1$ evaluations of $f$ at non-zero coordinates are completely independent of $f(0)$. Recent works ask whether the secret remains hidden even if, say, only 1 bit of information is leaked from \emph{each} share, independently. This question is well motivated due to the wide range of applications of Shamir's scheme. For instance, it is known that if Shamir's scheme is leakage resilient in some range of parameters, then known secure computation protocols are secure in a local leakage model. Over characteristic-2 fields, the answer is known to be negative (e.g., Guruswami and Wootters, STOC~'16). Benhamouda, Degwekar, Ishai, and Rabin (CRYPTO~'18) were the first to give a positive answer assuming computation is done over prime-order fields. They showed that if $t \ge 0.907n$, then Shamir's scheme is leakage resilient. Since then, there have been extensive efforts to improve the above threshold and after a series of works, the current record shows leakage resilience for $t\ge 0.78n$ (Maji et al., ISIT~'22). All existing analyses of Shamir's leakage resilience for general leakage functions follow a single framework for which there is a known barrier for any $t \le 0.5 n$. In this work, we develop a new analytical framework that allows us to significantly improve upon the previous record and obtain additional new results. Specifically, we show: \begin{enumerate} \item Shamir's scheme is leakage resilient for any $t \ge 0.69n$. \item If the leakage functions are guaranteed to be ``balanced'' (i.e., splitting the domain of possible shares into 2 roughly equal-size parts), then Shamir's scheme is leakage resilient for any $t \ge 0.58n$. \item If the leakage functions are guaranteed to be ``unbalanced'' (i.e., splitting the domain of possible shares into 2 parts of very different sizes), then Shamir's scheme is leakage resilient as long as $t \ge 0.01 n$. Such a result is \emph{provably} impossible to obtain using the previously known technique. \end{enumerate} All of the above apply more generally to any MDS-code-based secret sharing scheme. Confirming leakage resilience is most important in the range $t \leq n/2$, as in many applications, Shamir's scheme is used with thresholds $t\leq n/2$. As opposed to the previous approach, ours does not seem to have a barrier at $t=n/2$, as demonstrated by our third contribution.
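The experiment studied above is easy to state in code. Below is a minimal Python sketch (parameters and the leakage function are illustrative assumptions) of Shamir sharing over a prime field with one bit leaked locally from each share.

```python
# Shamir secret sharing with local 1-bit leakage: each party applies a 1-bit
# function to its own share only. The paper asks for which thresholds t the
# n leaked bits reveal (essentially) nothing about the secret f(0).
import random

p = 2**31 - 1  # a prime field size with p > n (illustrative)

def shamir_share(secret: int, t: int, n: int):
    """Random degree-(t-1) polynomial f with f(0) = secret; party i gets f(i)."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod p
            acc = (acc * x + c) % p
        return acc
    return [f(i) for i in range(1, n + 1)]

def local_leakage(shares, leak=lambda s: s & 1):
    """Each share leaks one bit, independently (here: its low-order bit)."""
    return [leak(s) for s in shares]

shares = shamir_share(secret=7, t=5, n=10)
print(local_leakage(shares))  # 10 bits -- do they say anything about the secret?
```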
2023
TCC
3-Party Secure Computation for RAMs: Optimal and Concretely Efficient
A distributed oblivious RAM (DORAM) is a method for accessing a secret-shared memory while hiding the accessed locations. DORAMs are the key tool for secure multiparty computation (MPC) for RAM programs that avoids expensive RAM-to-circuit transformations. We present new and improved 3-party DORAM protocols. For a logical memory of size $N$ and for each logical operation, our DORAM requires $O(\log N)$ local CPU computation steps. This is known to be asymptotically optimal. Our DORAM satisfies passive security in the honest majority setting. Our technique yields concretely efficient protocols and does not use expensive cryptography (such as re-randomizable or homomorphic encryption). Specifically, our DORAM is 25X faster than the most efficient previously known DORAM in the same setting. Lastly, we extend our technique to handle malicious attackers at the expense of using slightly larger blocks (i.e., $\omega(\log^2 N)$ vs. $\Omega(\log N)$). To the best of our knowledge, this is the first concretely efficient maliciously secure DORAM. Technically, our construction relies on a novel concretely efficient 3-party oblivious permutation protocol. We combine it with efficient non-oblivious hashing techniques (i.e., Cuckoo hashing) to get a distributed oblivious hash table. From this, we build a full-fledged DORAM using a distributed variant of the hierarchical approach of Goldreich and Ostrovsky (J. ACM '96). These ideas, and especially the permutation protocol, are of independent interest.
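For contrast with the $O(\log N)$ result above, here is a toy Python baseline (not the paper's protocol) showing the simplest way to hide an accessed location among 3 servers: read via an inner product with an additively secret-shared one-hot vector, at linear cost. The memory is held in the clear by the servers for simplicity, whereas a real DORAM also secret-shares its contents.

```python
# Linear-overhead oblivious read: the logical index is hidden inside additive
# shares of a one-hot vector, so no single server learns which location was
# read -- at cost O(N) per access versus the paper's O(log N). Toy sketch:
# secret-sharing the memory contents too would additionally require a
# multiplication protocol.
import random

P = 2**61 - 1  # prime modulus for additive secret sharing

def share(x: int, parties: int = 3):
    """Additive shares of x mod P, one per party."""
    s = [random.randrange(P) for _ in range(parties - 1)]
    return s + [(x - sum(s)) % P]

mem = [10, 20, 30, 40]   # memory contents, public to the servers in this toy
idx = 2                  # the secret logical address being read
onehot_shares = [share(1 if i == idx else 0) for i in range(len(mem))]

# Each server locally computes sum_i mem[i] * (its share of onehot[i]);
# summing the three partial results reconstructs mem[idx].
partials = [sum(mem[i] * onehot_shares[i][party] for i in range(len(mem))) % P
            for party in range(3)]
print(sum(partials) % P)  # 30 == mem[idx]
```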
2023
TCC
Distributed-Prover Interactive Proofs
Interactive proof systems enable a verifier with limited resources to decide an intractable language (or compute a hard function) by communicating with a powerful but untrusted prover. Such systems guarantee soundness: the prover can only convince the verifier of true statements. This is a central notion in computer science with far-reaching implications. One key drawback of the classical model is that the data on which the prover operates must be held by a single machine. In this work, we initiate the study of distributed-prover interactive proofs (dpIPs): an untrusted cluster of machines, acting as a distributed prover, interacts with a single verifier. The machines in the cluster jointly store and operate on a massive data-set that no single machine can store. The goal is for the machines in the cluster to convince the verifier of the validity of some statement about its data-set. We formalize the communication and space constraints via the massively parallel computation (MPC) model, a widely accepted analytical framework capturing the computational power of massive data-centers. Our main result is a compiler that generically augments any verification algorithm in the MPC model with a soundness guarantee. Concretely, for any language $L$ for which there is an MPC algorithm verifying whether $x \in L$, we design a new MPC protocol capable of convincing a verifier of the validity of $x\in L$ and where if $x\not\in L$, the verifier will almost surely reject, no matter what. The new protocol requires only slightly more rounds, i.e., a $\mathsf{poly}(\log N)$ blowup, and a slightly bigger memory per machine, i.e., $\mathsf{poly}(\lambda)$ blowup, where $N$ is the total size of the data-set and $\lambda$ is a security parameter independent of $N$. En route, we introduce distributed-prover interactive oracle proofs (dpIOPs), a natural adaptation of the (by now classical) IOP model to the distributed prover setting. We design a dpIOP for algorithms in the MPC model and then translate them to ``plain model'' dpIPs via an adaptation of existing polynomial commitment schemes into the distributed prover setting.
2022
CRYPTO
On Time-Space Tradeoffs for Bounded-Length Collisions in Merkle-Damgård Hashing
Ashrujit Ghoshal Ilan Komargodski
We study the power of preprocessing adversaries in finding bounded-length collisions in the widely used Merkle-Damgård (MD) hashing in the random oracle model. Specifically, we consider adversaries which have arbitrary $S$-bit advice about the random oracle and can make at most $T$ queries to it. Our goal is to characterize the advantage of such adversaries in finding a $B$-block collision in an MD hash function constructed using the random oracle with range size $N$ as the compression function (given a random salt). The answer to this question is completely understood for very large values of $B$ (essentially $\Omega(T)$) as well as for $B=1,2$. For $B\approx T$, Coretti et al.~(EUROCRYPT '18) gave matching upper and lower bounds of $\tilde\Theta(ST^2/N)$. Akshima et al.~(CRYPTO '20) observed that the attack of Coretti et al.\ could be adapted to work for any value of $B>1$, giving an attack with advantage $\tilde\Omega(STB/N + T^2/N)$. Unfortunately, they could only prove that this attack is optimal for $B=2$. Their proof involves a compression argument with exhaustive case analysis and, as they claim, a naive attempt to generalize their bound to larger values of $B$ (even for $B=3$) would lead to an explosion in the number of cases needed to be analyzed, making it unmanageable. With the lack of a more general upper bound, they formulated the \emph{STB conjecture}, stating that the best-possible advantage is $\tilde{O}(STB/N + T^2/N)$ for any $B>1$. In this work, we confirm the STB conjecture in many new parameter settings. For instance, in one result, we show that the conjecture holds for all constant values of $B$, significantly extending the result of Akshima et al. Further, using combinatorial properties of graphs, we are able to confirm the conjecture even for super constant values of $B$, as long as some restriction is made on $S$. For instance, we confirm the conjecture for all $B \le T^{1/4}$ as long as $S \le T^{1/8}$. Technically, we develop structural characterizations for bounded-length collisions in MD hashing that allow us to give a compression argument in which the number of cases needed to be handled does not explode.
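To fix notation, here is a minimal Python sketch of salted MD hashing and what a $B$-block collision means; truncated SHA-256 is an illustrative stand-in for the random-oracle compression function, and length padding (MD strengthening) is omitted.

```python
# Salted Merkle-Damgard hashing: the message is split into blocks that are
# absorbed through a compression function h, starting from a random salt.
# An (S,T)-attacker wins the B-block game by outputting two distinct messages
# of at most B blocks each that hash, under the given salt, to the same value.
import hashlib

BLOCK = 16  # bytes per message block (illustrative)

def h(chaining: bytes, block: bytes) -> bytes:
    """Stand-in compression function (the paper models it as a random oracle)."""
    return hashlib.sha256(chaining + block).digest()[:8]

def md_hash(salt: bytes, msg: bytes) -> bytes:
    blocks = [msg[i:i + BLOCK] for i in range(0, len(msg), BLOCK)] or [b""]
    state = salt  # the random salt serves as the initial chaining value
    for blk in blocks:
        state = h(state, blk.ljust(BLOCK, b"\x00"))
    return state  # length padding / MD strengthening omitted for brevity

salt = bytes(8)
print(md_hash(salt, b"hello world").hex())
```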
2022
CRYPTO
Time-Space Tradeoffs for Sponge Hashing: Attacks and Limitations for Short Collisions
Cody Freitag Ashrujit Ghoshal Ilan Komargodski
Sponge hashing is a novel alternative to the popular Merkle-Damgård hashing design. The sponge construction has become increasingly popular in various applications, perhaps most notably, it underlies the SHA-3 hashing standard. Sponge hashing is parametrized by two numbers, $r$ and $c$ (bitrate and capacity, respectively), and by a fixed-size permutation on $r+c$ bits. In this work, we study the collision resistance of sponge hashing instantiated with a random permutation by adversaries with an arbitrary $S$-bit auxiliary advice input about the random permutation and $T$ queries. Recent work by Coretti et al.\ (CRYPTO '18) showed that such adversaries can find collisions (with respect to a random IV) with advantage $\Theta(ST^2/2^c + T^2/ 2^{r})$. Although the above attack formally breaks collision resistance in some range of parameters, its practical relevance is limited since the resulting collision is very long (on the order of $T$ blocks). Focusing on the task of finding \emph{short} collisions, we study the complexity of finding a $B$-block collision for a given parameter $B\ge 1$. We give several new attacks and limitations. Most notably, we give a new attack that results in a single-block collision and has advantage \begin{align*} \Omega \left(\left(\frac{S^{2}T}{2^{2c}}\right)^{2/3} + \frac{T^2}{2^r}\right). \end{align*} In some range of parameters, our attack has constant advantage of winning while the previously-known best attack has exponentially small advantage. To the best of our knowledge, this is the first natural application for which sponge hashing is \emph{provably less secure} than the corresponding instance of Merkle-Damgård hashing. Our attack relies on a novel connection between single-block collision finding in sponge hashing and the well-studied function inversion problem. We also give a general attack that works for any $B\ge 2$ and has advantage $\Omega({STB}/{2^{c}} + {T^2}/{2^{\min\{r,c\}}})$, adapting an idea of Akshima et al. (CRYPTO '20). We complement the above attacks with bounds on the best possible attacks. Specifically, we prove that there is a qualitative jump in the advantage of best possible attacks for finding unbounded-length collisions and those for finding very short collisions. Most notably, we prove (via a highly non-trivial compression argument) that the above attack is optimal for $B=2$ and in some range of parameters.
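For comparison with the MD sketch above, here is a minimal Python sketch of the sponge data flow with rate $r$ and capacity $c$. The "permutation" is a SHA-256-based placeholder (not actually a permutation) chosen only to make the absorb/squeeze structure visible, whereas SHA-3 uses the Keccak permutation.

```python
# Sponge hashing with rate R and capacity C (in bytes here, illustrative
# values): absorb R-byte message blocks by XORing into the rate part and
# permuting; squeeze output from the rate part, permuting between blocks.
import hashlib

R, C = 8, 16  # rate and capacity (NOT SHA-3's parameters)

def perm(state: bytes) -> bytes:
    """Placeholder for the fixed public permutation on R+C bytes. (SHA-256
    truncation is not a permutation; it only mimics the data flow.)"""
    return hashlib.sha256(state).digest()[:R + C]

def sponge_hash(msg: bytes, out_len: int = 8) -> bytes:
    state = bytes(R + C)  # all-zero initial state; collisions are w.r.t. this IV
    for i in range(0, max(len(msg), 1), R):            # absorb phase
        block = msg[i:i + R].ljust(R, b"\x00")
        state = bytes(s ^ b for s, b in zip(state[:R], block)) + state[R:]
        state = perm(state)
    out = b""
    while len(out) < out_len:                          # squeeze phase
        out += state[:R]
        state = perm(state)
    return out[:out_len]

print(sponge_hash(b"hello world").hex())
```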
2022
CRYPTO
Maliciously Secure Massively Parallel Computation for All-but-One Corruptions
The Massively Parallel Computation (MPC) model has gained wide adoption over the last decade. By now, it is widely accepted as the right model for capturing the commonly used programming paradigms (such as MapReduce, Hadoop, and Spark) that utilize parallel computation power to manipulate and analyze huge amounts of data. Motivated by the need to perform large-scale data analytics in a privacy-preserving manner, several recent works have presented generic compilers that transform algorithms in the MPC model into secure counterparts, while preserving various efficiency parameters of the original algorithms. The first paper, due to Chan et al. (ITCS '20), focused on the honest majority setting. Later, Fernando et al. (TCC '20) considered the dishonest majority setting. The latter work presented a compiler that transforms generic MPC algorithms into ones which are secure against \emph{semi-honest} attackers that may control all but one of the parties involved. The security of their resulting algorithm relied on the existence of a PKI and also on rather strong cryptographic assumptions: indistinguishability obfuscation and the circular security of certain LWE-based encryption systems. In this work, we focus on the dishonest majority setting, following Fernando et al. In this setting, the known compilers do not achieve the standard security notion called \emph{malicious} security, where attackers can arbitrarily deviate from the prescribed protocol. In fact, we show that unless very strong setup assumptions are made (such as a \emph{programmable} random oracle), it is provably \emph{impossible} to withstand malicious attackers due to the stringent requirements on space and round complexity. As our main contribution, we complement the above negative result by designing the first general compiler for malicious attackers in the dishonest majority setting. The resulting protocols withstand all-but-one corruptions. Our compiler relies on a simple PKI and a (programmable) random oracle, and is proven secure assuming LWE and SNARKs. Interestingly, even with such strong assumptions, it is rather non-trivial to obtain a secure protocol.
2022
CRYPTO
log∗-Round Game-Theoretically-Fair Leader Election
It is well-known that in the presence of majority coalitions, strongly fair coin toss is impossible. A line of recent works has shown that by relaxing the fairness notion to game theoretic, we can overcome this classical lower bound. In particular, Chung et al. (CRYPTO '21) showed how to achieve approximately (game-theoretically) fair leader election in the presence of majority coalitions, with round complexity as small as O(log log n) rounds. In this paper, we revisit the round complexity of game-theoretically fair leader election. We construct O(log* n)-round leader election protocols that achieve (1-o(1))-approximate fairness in the presence of (1-o(1))n-sized coalitions. Our protocols achieve the same round-fairness trade-offs as Chung et al.'s and have the advantage of being conceptually simpler. Finally, we also obtain game-theoretically fair protocols for committee election which might be of independent interest.
2022
JOFC
On the Complexity of Compressing Obfuscation
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct. In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple to describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (Functional signatures and pseudorandom functions, PKC, 2016) and Bitansky et al. (From Cryptomania to Obfustopia through secret-key functional encryption, TCC, 2016) by allowing for a broader regime of parameters. We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation. First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives. Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions. Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
2021
EUROCRYPT
Decentralized Multi-Authority ABE for DNFs from LWE
Pratish Datta Ilan Komargodski Brent Waters
We construct the first decentralized multi-authority attribute-based encryption (MA-ABE) scheme for a non-trivial class of access policies whose security is based (in the random oracle model) solely on the Learning With Errors (LWE) assumption. The supported access policies are ones described by DNF formulas. All previous constructions of MA-ABE schemes supporting any non-trivial class of access policies were proven secure (in the random oracle model) assuming various assumptions on bilinear maps. In our system, any party can become an authority and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. A party can simply act as a standard ABE authority by creating a public key and issuing private keys to different users that reflect their attributes. A user can encrypt data in terms of any DNF formulas over attributes issued from any chosen set of authorities. Finally, our system does not require any central authority. In terms of efficiency, when instantiating the scheme with a global bound $s$ on the size of access policies, the sizes of public keys, secret keys, and ciphertexts all grow with $s$. Technically, we develop new tools for building ciphertext-policy ABE (CP-ABE) schemes using LWE. Along the way, we construct the first provably secure CP-ABE scheme supporting access policies in $\mathsf{NC}^1$ under the LWE assumption that avoids the generic universal-circuit-based key-policy to ciphertext-policy transformation. In particular, our construction relies on linear secret sharing schemes with new properties and in some sense is more similar to CP-ABE schemes that rely on bilinear maps. While our CP-ABE construction is not more efficient than existing ones, it is conceptually intriguing and further we show how to extend it to get the MA-ABE scheme described above.
2021
EUROCRYPT
Multi-Party Reusable Non-Interactive Secure Computation from LWE
Motivated by the goal of designing versatile and flexible secure computation protocols that at the same time require as little interaction as possible, we present new multiparty reusable Non-Interactive Secure Computation (mrNISC) protocols. This notion, recently introduced by Benhamouda and Lin (TCC 2020), is essentially two-round Multi-Party Computation (MPC) protocols where the first round of messages serves as a reusable commitment to the private inputs of participating parties. Using these commitments, any subset of parties can later compute any function of their choice on their respective inputs by just sending a single message to a stateless evaluator, conveying the result of the computation but nothing else. Importantly, the input commitments can be computed without knowing anything about other participating parties (neither their identities nor their number) and they are reusable across any number of desired computations. We give a construction of mrNISC that achieves standard simulation security, as classical multi-round MPC protocols achieve. Our construction relies on the Learning With Errors (LWE) assumption with polynomial modulus, and on the existence of a pseudorandom function (PRF) in $\mathsf{NC}^1$. We achieve semi-malicious security in the plain model and malicious security by further relying on trusted setup (which is unavoidable for mrNISC). In comparison, the only previously known constructions of mrNISC were either using bilinear maps or using strong primitives such as program obfuscation. We use our mrNISC to obtain new Multi-Key FHE (MKFHE) schemes with threshold decryption: - In the CRS model, we obtain threshold MKFHE for $\mathsf{NC}^1$ based on LWE with only {\em polynomial} modulus and PRFs in $\mathsf{NC}^1$, whereas all previous constructions rely on LWE with super-polynomial modulus-to-noise ratio. - In the plain model, we obtain threshold levelled MKFHE for $\mathsf{P}$ based on LWE with {\em polynomial} modulus, PRF in $\mathsf{NC}^1$, and NTRU, and another scheme for constant number of parties from LWE with sub-exponential modulus-to-noise ratio. The only known prior construction of threshold MKFHE (Ananth et al., TCC 2020) in the plain model restricts the set of parties who can compute together at the onset.
2021
CRYPTO
A Logarithmic Lower Bound for Oblivious RAM (for all parameters)
Ilan Komargodski Wei-Kai Lin
An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e., for every input the observed locations accessed are similarly distributed. In recent years there has been great progress both in terms of upper bounds as well as in terms of lower bounds, essentially pinning down the smallest overhead possible in various settings of parameters. We observe that there is a very natural setting of parameters in which \emph{no} non-trivial lower bound is known, not even ones in restricted models of computation (like the so-called balls-and-bins model). Let $N$ and $w$ be the number of cells and bit-size of cells, respectively, in the RAM that we wish to simulate obliviously. Denote by $b$ the cell bit-size of the ORAM. \emph{All} previous ORAM lower bounds have a multiplicative $w/b$ factor which makes them trivial in many settings of parameters of interest. In this work, we prove a new ORAM lower bound that captures this setting (and in all other settings it is at least as good as previous ones, quantitatively). We show that any ORAM must make (amortized) $$ \Omega\left(\log \left(\frac{Nw}{m}\right)/\log\left(\frac{b}{w}\right)\right) $$ memory probes for every logical operation. Here, $m$ denotes the bit-size of the local storage of the ORAM. Our lower bound implies that logarithmic overhead in accesses is necessary, even if $ b \gg w$. Our lower bound is tight for \emph{all} settings of parameters, up to the $\log(b/w)$ factor. Our bound also extends to the non-colluding multi-server setting. As an application, we derive the first (unconditional) separation between the overhead needed for ORAMs in the \emph{online} vs.\ \emph{offline} models. Specifically, we show that when $w=\log N$ and $b,m \in \mathsf{poly}\log N$, there exists an offline ORAM that makes (on average) $o(1)$ memory probes per logical operation while every online one must make $\Omega(\log N/\log\log N)$ memory probes per logical operation. No such previous separation was known for any setting of parameters, not even in the balls-and-bins model.
2021
CRYPTO
Oblivious RAM with Worst-Case Logarithmic Overhead
We present the first Oblivious RAM (ORAM) construction that for $N$ memory blocks supports accesses with \emph{worst-case} $O(\log N)$ overhead for any block size $\Omega(\log N)$ while requiring a client memory of only a constant number of memory blocks. We rely on the existence of one-way functions and guarantee computational security. Our result closes a long line of research on fundamental feasibility results for ORAM constructions as logarithmic overhead is necessary. The previous best logarithmic overhead construction only guarantees it in an \emph{amortized} sense, i.e., logarithmic overhead is achieved only for long enough access sequences, where some of the individual accesses incur $\Theta(N)$ overhead. The previously best ORAM in terms of \emph{worst-case} overhead achieves $O(\log^2 N/\log\log N)$ overhead. Technically, we design a novel de-amortization framework for modern ORAM constructions that use the ``shuffled inputs'' assumption. Our framework significantly departs from all previous de-amortization frameworks, originating from Ostrovsky and Shoup (STOC~'97), that seem to be fundamentally too weak to be applied on modern ORAM constructions.
2021
CRYPTO
Non-Malleable Codes for Bounded Parallel-Time Tampering
Dana Dachman-Soled Ilan Komargodski Rafael Pass
Non-malleable codes allow one to encode data in such a way that once a codeword is tampered with, the modified codeword is either an encoding of the original message, or a completely unrelated one. Since the introduction of this notion by Dziembowski, Pietrzak, and Wichs (ICS '10 and J. ACM '18), there has been a large body of works realizing such coding schemes secure against various classes of tampering functions. It is well known that there is no efficient non-malleable code secure against all polynomial size tampering functions. Nevertheless, no code which is non-malleable for \emph{bounded} polynomial size attackers is known and obtaining such a code has been a major open problem. We present the first construction of a non-malleable code secure against all polynomial size tampering functions that have \emph{bounded} parallel time. This is an even larger class than all bounded polynomial size functions. In particular, this class includes all functions in non-uniform $\mathbf{NC}$ (and much more). Our construction is in the plain model (i.e., no trusted setup) and relies on several cryptographic assumptions such as keyless hash functions, time-lock puzzles, as well as other standard assumptions. Additionally, our construction has several appealing properties: the complexity of encoding is independent of the class of tampering functions and we can obtain (sub-)exponentially small error.
2021
TCC
Non-Malleable Time-Lock Puzzles and Applications
Time-lock puzzles are a mechanism for sending messages "to the future", by allowing a sender to quickly generate a puzzle with an underlying message that remains hidden until a receiver spends a moderately large amount of time solving it. We introduce and construct a variant of a time-lock puzzle which is non-malleable, which roughly guarantees that it is impossible to "maul" a puzzle into one for a related message without solving it. Using non-malleable time-lock puzzles, we achieve the following applications: - The first fair non-interactive multi-party protocols for coin flipping and auctions in the plain model without setup. - Practically efficient fair multi-party protocols for coin flipping and auctions proven secure in the (auxiliary-input) random oracle model. As a key step towards proving the security of our protocols, we introduce the notion of functional non-malleability, which protects against tampering attacks that affect a specific function of the related messages. To support an unbounded number of participants in our protocols, our time-lock puzzles satisfy functional non-malleability in the fully concurrent setting. We additionally show that standard (non-functional) non-malleability is impossible to achieve in the concurrent setting (even in the random oracle model).
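As background, the classic Rivest-Shamir-Wagner puzzle that this line of work builds on is easy to sketch in Python (toy parameters; the paper's non-malleability guarantee is of course not captured by this sketch):

```python
# Rivest-Shamir-Wagner time-lock puzzle: the generator, knowing phi(N), can
# compute x^(2^t) mod N quickly; the solver must (conjecturally) perform t
# sequential modular squarings. Toy ~30-bit primes, illustration only.
import random

def gen_puzzle(msg: int, t: int):
    p, q = 1000000007, 998244353          # toy primes; real puzzles use huge ones
    N, phi = p * q, (p - 1) * (q - 1)
    x = random.randrange(2, N)            # gcd(x, N) = 1 except w/ tiny probability
    y = pow(x, pow(2, t, phi), N)         # fast path: exponent reduced mod phi(N)
    return (N, x, t, (msg + y) % N)       # y is a one-time mask on the message

def solve_puzzle(puzzle):
    N, x, t, masked = puzzle
    y = x
    for _ in range(t):                    # t inherently sequential squarings
        y = y * y % N
    return (masked - y) % N

print(solve_puzzle(gen_puzzle(42, t=100_000)))  # 42, after 100000 squarings
```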
2020
EUROCRYPT
OptORAMa: Optimal Oblivious RAM
Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC '87 and J. ACM '96) is a technique for provably obfuscating programs' access patterns, such that the access patterns leak no information about the programs' secret inputs. To compile a general program to an oblivious counterpart, it is well-known that $\Omega(\log N)$ amortized blowup is necessary, where $N$ is the size of the logical memory. This was shown in Goldreich and Ostrovsky's original ORAM work for statistical security and in a somewhat restricted model (the so-called \emph{balls-and-bins} model), and recently by Larsen and Nielsen (CRYPTO '18) for computational security. A long standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this paper, we resolve this problem and present the first secure ORAM with $O(\log N)$ amortized blowup, assuming one-way functions. Our result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS '18) who gave a construction with $O(\log N\cdot \log\log N)$ amortized blowup, assuming one-way functions. One of our building blocks of independent interest is a linear-time deterministic oblivious algorithm for tight compaction: Given an array of $n$ elements where some elements are marked, we permute the elements in the array so that all marked elements end up in the front of the array. Our $O(n)$ algorithm improves the previously best known deterministic or randomized algorithms whose running time is $O(n \cdot\log n)$ or $O(n \cdot\log \log n)$, respectively.
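The tight compaction functionality mentioned above is simple to state; the snippet below (Python) pins it down in its trivial non-oblivious form, which leaks the marks through its access pattern, whereas the paper achieves the same input-output behavior obliviously in deterministic $O(n)$ time.

```python
# Tight compaction, non-oblivious version: move all marked elements to the
# front. This one-liner's memory accesses depend on the marks; the paper's
# algorithm computes the same kind of permutation with an access pattern that
# is independent of which elements are marked, in linear time.
def tight_compaction(arr, marked):
    return ([x for x, m in zip(arr, marked) if m] +
            [x for x, m in zip(arr, marked) if not m])

print(tight_compaction([10, 20, 30, 40], [False, True, False, True]))
# -> [20, 40, 10, 30]
```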
2020
EUROCRYPT
Continuous Verifiable Delay Functions
We introduce the notion of a continuous verifiable delay function (cVDF): a function g which is (a) iteratively sequential---meaning that evaluating the iteration $g^{(t)}$ of g (on a random input) takes time roughly t times the time to evaluate g, even with many parallel processors, and (b) (iteratively) verifiable---the output of $g^{(t)}$ can be efficiently verified (in time that is essentially independent of t). In other words, the iterated function $g^{(t)}$ is a verifiable delay function (VDF) (Boneh et al., CRYPTO '18), having the property that intermediate steps of the computation (i.e., $g^{(t')}$ for $t'<t$) are publicly and continuously verifiable. We demonstrate that cVDFs have intriguing applications: (a) they can be used to construct public randomness beacons that only require an initial random seed (and no further unpredictable sources of randomness), (b) enable outsourceable VDFs where any part of the VDF computation can be verifiably outsourced, and (c) have deep complexity-theoretic consequences: in particular, they imply the existence of depth-robust moderately-hard Nash equilibrium problem instances, i.e., instances that can be solved in polynomial time yet require a high sequential running time. Our main result is the construction of a cVDF based on the repeated squaring assumption and the soundness of the Fiat-Shamir (FS) heuristic for constant-round proofs. We highlight that when viewed as a (plain) VDF, our construction requires a weaker FS assumption than previous ones (earlier constructions require the FS heuristic for either super-logarithmic round proofs, or for arguments).
2020
EUROCRYPT
SPARKs: Succinct Parallelizable Arguments of Knowledge
We introduce the notion of a Succinct Parallelizable Argument of Knowledge (SPARK). This is an argument system with the following three properties for computing and proving a time T (non-deterministic) computation: - The prover's (parallel) running time is T + polylog T. (In other words, the prover's running time is essentially T for large computation times!) - The prover uses at most polylog T processors. - The communication complexity and verifier complexity are both polylog T. While the third property is standard in succinct arguments, the combination of all three is desirable as it gives a way to leverage moderate parallelism in favor of near-optimal running time. We emphasize that even a factor two overhead in the prover's parallel running time is not allowed. Our main results are the following, all for non-deterministic polynomial-time RAM computation. We construct (1) an (interactive) SPARK based solely on the existence of collision-resistant hash functions, and (2) a non-interactive SPARK based on any collision-resistant hash function and any SNARK with quasi-linear overhead (as satisfied by recent SNARK constructions).
2020
TCC
Secure Massively Parallel Computation for Dishonest Majority
Rex Fernando Ilan Komargodski Yanyi Liu Elaine Shi
This work concerns secure protocols in the massively parallel computation (MPC) model, which is one of the most widely-accepted models for capturing the challenges of writing protocols for the types of parallel computing clusters which have become commonplace today (MapReduce, Hadoop, Spark, etc.). Recently, the work of Chan et al. (ITCS ’20) initiated this study, giving a way to compile any MPC protocol into a secure one in the common random string model, achieving the standard secure multi-party computation definition of security with up to 1/3 of the parties being corrupt. We are interested in achieving security for much more than 1/3 corruptions. To that end, we give two compilers for MPC protocols, which assume a simple public-key infrastructure, and achieve semi-honest security for all-but-one corruptions. Our first compiler assumes hardness of the learning-with-errors (LWE) problem, and works for any MPC protocol with “short” output—that is, where the output of the protocol can fit into the storage space of one machine, for instance protocols that output a trained machine learning model. Our second compiler works for any MPC protocol (even ones with a long output, such as sorting) but assumes, in addition to LWE, indistinguishability obfuscation and a circular secure variant of threshold FHE.
2019
EUROCRYPT
Distributional Collision Resistance Beyond One-Way Functions
Distributional collision resistance is a relaxation of collision resistance that only requires that it is hard to sample a collision (x, y) where x is uniformly random and y is uniformly random conditioned on colliding with x. The notion lies between one-wayness and collision resistance, but its exact power is still not well-understood. On one hand, distributional collision resistant hash functions cannot be built from one-way functions in a black-box way, which may suggest that they are stronger. On the other hand, so far, they have not yielded any applications beyond one-way functions. Assuming distributional collision resistant hash functions, we construct a constant-round statistically hiding commitment scheme. Such commitments are not known based on one-way functions, and are impossible to obtain from one-way functions in a black-box way. Our construction relies on the reduction from inaccessible entropy generators to statistically hiding commitments by Haitner et al. (STOC '09). In the converse direction, we show that two-message statistically hiding commitments imply distributional collision resistance, thereby establishing a loose equivalence between the two notions. A corollary of the first result is that constant-round statistically hiding commitments are implied by average-case hardness in the class $\mathsf{SZK}$ (which is known to imply distributional collision resistance). This implication seems to be folklore, but to the best of our knowledge has not been proven explicitly. We provide yet another proof of this implication, which is arguably more direct than the one going through distributional collision resistance.
2019
JOFC
Hardness-Preserving Reductions via Cuckoo Hashing
The focus of this work is hardness-preserving transformations of somewhat limited pseudorandom function families (PRFs) into ones with more versatile characteristics. Consider the problem of domain extension of pseudorandom functions: given a PRF that takes as input elements of some domain $\mathcal{U}$, we would like to come up with a PRF over a larger domain. Can we do it with little work and without significantly impacting the security of the system? One approach is to first hash the larger domain into the smaller one and then apply the original PRF. Such a reduction, however, is vulnerable to a “birthday attack”: after $\sqrt{|\mathcal{U}|}$ queries to the resulting PRF, a collision (i.e., two distinct inputs having the same hash value) is very likely to occur. As a consequence, the resulting PRF is insecure against an attacker making this number of queries. In this work, we show how to go beyond the aforementioned birthday attack barrier by replacing the above simple hashing approach with a variant of cuckoo hashing, a hashing paradigm that resolves collisions in a table by using two hash functions and two tables, cleverly assigning each element to one of the two tables. We use this approach to obtain: (i) a domain extension method that requires just two calls to the original PRF, withstands as many queries as the original domain size, and has a distinguishing probability that is exponentially small in the amount of non-cryptographic work; and (ii) a security-preserving reduction from non-adaptive to adaptive PRFs.
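A minimal Python sketch of the cuckoo-hashing idea described above follows (hash functions and sizes are illustrative assumptions); in the domain-extension application, an element's resolved table determines which of two independent PRF calls handles it, which is how the construction sidesteps the birthday bound of naive hash-then-PRF.

```python
# Cuckoo hashing: two tables, two hash functions; inserting into an occupied
# slot evicts the incumbent to its position in the other table. Lookups probe
# exactly two slots. SHA-256-based hashing stands in for the two hash functions.
import hashlib

class CuckooTable:
    def __init__(self, size: int):
        self.size = size
        self.tables = [[None] * size, [None] * size]

    def _h(self, which: int, key: bytes) -> int:
        digest = hashlib.sha256(bytes([which]) + key).digest()
        return int.from_bytes(digest[:4], "big") % self.size

    def insert(self, key: bytes, max_kicks: int = 100) -> bool:
        which = 0
        for _ in range(max_kicks):
            i = self._h(which, key)
            if self.tables[which][i] is None:
                self.tables[which][i] = key
                return True
            # evict the incumbent, then re-insert it into the *other* table
            key, self.tables[which][i] = self.tables[which][i], key
            which ^= 1
        return False  # a full implementation would rehash at this point

    def lookup(self, key: bytes) -> bool:
        """Exactly two probes: slot h0(key) in table 0, slot h1(key) in table 1."""
        return any(self.tables[w][self._h(w, key)] == key for w in (0, 1))

t = CuckooTable(64)
for k in (b"a", b"b", b"c"):
    t.insert(k)
print(t.lookup(b"b"), t.lookup(b"z"))  # True False
```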
2019
CRYPTO
Non-Uniformly Sound Certificates with Applications to Concurrent Zero-Knowledge
Cody Freitag Ilan Komargodski Rafael Pass
We introduce the notion of non-uniformly sound certificates: succinct single-message (unidirectional) argument systems that satisfy a “best-possible security” against non-uniform polynomial-time attackers. In particular, no polynomial-time attacker with s bits of non-uniform advice can find significantly more than s accepting proofs for false statements. Our first result is a construction of non-uniformly sound certificates for all $\mathbf{NP}$ in the random oracle model, where the attacker's advice can depend arbitrarily on the random oracle. We next show that the existence of non-uniformly sound certificates for $\mathbf{P}$ (and collision resistant hash functions) yields a public-coin constant-round fully concurrent zero-knowledge argument for $\mathbf{NP}$.
2019
JOFC
From Minicrypt to Obfustopia via Private-Key Functional Encryption
Ilan Komargodski Gil Segev
Private-key functional encryption enables fine-grained access to symmetrically encrypted data. Although private-key functional encryption (supporting an unbounded number of keys and ciphertexts) seems significantly weaker than its public-key variant, its known realizations all rely on public-key functional encryption. At the same time, however, up until recently it was not known to imply any public-key primitive, demonstrating our poor understanding of this primitive. Bitansky et al. (Theory of cryptography—14th international conference, TCC 2016-B, 2016) showed that sub-exponentially secure private-key functional encryption bridges from nearly exponential security in Minicrypt to slightly super-polynomial security in Cryptomania, and from sub-exponential security in Cryptomania to Obfustopia. Specifically, given any sub-exponentially secure private-key functional encryption scheme and a nearly exponentially secure one-way function, they constructed a public-key encryption scheme with slightly super-polynomial security. Assuming, in addition, a sub-exponentially secure public-key encryption scheme, they then constructed an indistinguishability obfuscator (or a public-key functional encryption scheme if the given building blocks are polynomially secure). We show that quasi-polynomially secure private-key functional encryption bridges from sub-exponential security in Minicrypt all the way to Cryptomania. First, given any quasi-polynomially secure private-key functional encryption scheme, we construct an indistinguishability obfuscator for circuits with inputs of poly-logarithmic length. Then, we observe that such an obfuscator can be used to instantiate many natural applications of indistinguishability obfuscation. Specifically, relying on sub-exponentially secure one-way functions, we show that quasi-polynomially secure private-key functional encryption implies not just public-key encryption but leads all the way to public-key functional encryption for circuits with inputs of poly-logarithmic length. Moreover, relying on sub-exponentially secure injective one-way functions, we show that quasi-polynomially secure private-key functional encryption implies a hard-on-average distribution over instances of a PPAD-complete problem. Underlying our constructions is a new transformation from single-input functional encryption to multi-input functional encryption in the private-key setting. The previously known such transformation (Brakerski et al. J Cryptol 31(2):434–520, 2018) required a sub-exponentially secure single-input scheme, and obtained a scheme supporting only a slightly super-constant number of inputs. Our transformation both relaxes the underlying assumption and supports more inputs: Given any quasi-polynomially secure single-input scheme, we obtain a scheme supporting a poly-logarithmic number of inputs.
2018
JOFC
2018
JOFC
2018
EUROCRYPT
2018
EUROCRYPT
2018
EUROCRYPT
2018
CRYPTO
On the Complexity of Compressing Obfuscation
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct. In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple to describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters. We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation. First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives. Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions. Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
2018
CRYPTO
On Distributional Collision Resistant Hashing
Ilan Komargodski Eylon Yogev
Collision resistant hashing is a fundamental concept that is the basis for many of the important cryptographic primitives and protocols. Collision resistant hashing is a family of compressing functions such that no efficient adversary can find any collision given a random function in the family. In this work we study a relaxation of collision resistance called distributional collision resistance, introduced by Dubrov and Ishai (STOC '06). This relaxation of collision resistance only guarantees that no efficient adversary, given a random function in the family, can sample a pair (x, y) where x is uniformly random and y is uniformly random conditioned on colliding with x. Our first result shows that distributional collision resistance can be based on the existence of multi-collision resistant hash functions (with no additional assumptions). Multi-collision resistance is another relaxation of collision resistance which guarantees that an efficient adversary cannot find any tuple of $k>2$ inputs that collide relative to a random function in the family. The construction is non-explicit, non-black-box, and yields an infinitely-often secure family. This partially resolves a question of Berman et al. (EUROCRYPT '18). We further observe that in a black-box model such an implication (from multi-collision resistance to distributional collision resistance) does not exist. Our second result is a construction of a distributional collision resistant hash from the average-case hardness of SZK. Previously, this assumption was not known to imply any form of collision resistance (other than the ones implied by one-way functions).
2017
EUROCRYPT
2017
CRYPTO
2017
ASIACRYPT
2017
TCC
2017
JOFC
2016
EUROCRYPT
2016
TCC
2016
TCC
2016
TCC
2016
TCC
2015
TCC
2014
ASIACRYPT
2013
TCC

Program Committees

Eurocrypt 2024
TCC 2024
Crypto 2023
PKC 2022
Asiacrypt 2022
TCC 2021
TCC 2020
Eurocrypt 2019
Asiacrypt 2018