CryptoDB
Daniel Wichs
Publications
Year
Venue
Title
2024
EUROCRYPT
Laconic Function Evaluation, Functional Encryption and Obfuscation for RAMs with Sublinear Computation
Abstract
Laconic function evaluation (LFE) is a ``flipped'' version of fully homomorphic encryption, where the server performing the computation gets the output. The server commits itself to a function $f$ by outputting a small digest. Clients can later efficiently encrypt inputs $x$ with respect to the digest in much less time than computing $f$, and ensure that the server only decrypts $f(x)$, but does not learn anything else about $x$. Prior works constructed LFE for \emph{circuits} under LWE, and for \emph{Turing Machines (TMs)} from indistinguishability obfuscation (iO). In this work we introduce LFE for \emph{Random-Access Machines} (RAM-LFE). The server commits itself to a potentially huge database $y$ via a short digest. Clients can later efficiently encrypt inputs $x$ with respect to the digest and the server decrypts $f(x,y)$ for some specified RAM program $f$ (e.g., a universal RAM), without learning anything else about $x$. The main advantage of RAM-LFE is that the server's decryption run-time only scales with the RAM run-time $T$ of the computation $f(x,y)$, which can be sublinear in both $|x|$ and $|y|$. We consider a \emph{weakly efficient} variant, where the client's run-time is also allowed to scale linearly with $T$, but not $|y|$, and a \emph{strongly efficient} variant, where the client's run-time must be sublinear in both $T$ and $|y|$. We construct the former from doubly efficient private information retrieval (DEPIR) and laconic OT (LOT), both of which are known from RingLWE, and the latter from an additional use of iO. We then show how to leverage strongly efficient RAM-LFE to also get (many-key) \emph{functional encryption for RAMs (RAM-FE)} where secret keys are associated with big databases $y$ and the decryption time is sublinear in $|y|$, as well as \emph{iO for RAMs} where the obfuscated program contains a big database $y$ and the evaluation time is sublinear in $|y|$.
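To pin down the interface, here is a purely functional toy model of RAM-LFE (a trusted party written in code, with no cryptography and hence no security); all names such as `Server`, `Client`, and `decrypt` are illustrative and not from the paper:

```python
# Toy model of the RAM-LFE *functionality* only: it shows the syntax and
# the efficiency goal (decryption time = RAM time of f(x, y)), not any
# actual cryptographic construction. All names are hypothetical.

import hashlib

class Server:
    def __init__(self, y: bytes):
        self.y = y                                  # potentially huge database
        self.digest = hashlib.sha256(y).digest()    # short commitment to y

class Client:
    def __init__(self, digest: bytes):
        self.digest = digest

    def enc(self, x: bytes) -> bytes:
        # In the real scheme this ciphertext hides x; here it is a stand-in.
        return x

def decrypt(server: Server, f, ct: bytes):
    # Real decryption runs in the RAM time T of f(x, y), which can be
    # sublinear in |x| and |y|; the server learns f(x, y) and nothing else.
    return f(ct, server.y)

server = Server(b"...big database y...")
client = Client(server.digest)
ct = client.enc(b"query")
print(decrypt(server, lambda x, y: y[:3], ct))
```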
2024
CRYPTO
Laconic Function Evaluation and ABE for RAMs from (Ring-)LWE
Abstract
Laconic function evaluation (LFE) allows us to compress a circuit $f$ into a short digest. Anybody can use this digest as a public key to efficiently encrypt some input $x$. Decrypting the resulting ciphertext reveals the output $f(x)$, while hiding everything else about $x$. In this work we consider LFE for \emph{Random-Access Machines} (RAM-LFE) where, instead of a circuit $f$, we have a RAM program $f_{\mathsf{DB}}$ that potentially contains some large hard-coded data $\mathsf{DB}$. The decryption run-time to recover $f_{\mathsf{DB}}(x)$ from the ciphertext should be roughly the same as a plain evaluation of $f_{\mathsf{DB}}(x)$ in the RAM model, which can be sublinear in the size of $\mathsf{DB}$. Prior works constructed LFE for circuits under LWE, and RAM-LFE under indistinguishability obfuscation (iO) and Ring-LWE. In this work, we construct RAM-LFE with essentially optimal encryption and decryption run-times from just Ring-LWE and a standard circular security assumption, without iO.
RAM-LFE directly yields 1-key succinct functional encryption and reusable garbling for RAMs with similar parameters.
If we only want an \emph{attribute-based} LFE for RAMs (RAM-AB-LFE), then we can replace Ring-LWE with plain LWE in the above. Orthogonally, if we only want \emph{leveled} schemes, where the encryption/decryption efficiency can scale with the depth of the RAM computation, then we can remove the need for a circular-security assumption. Lastly, we also get a leveled many-key \emph{attribute-based encryption for RAMs (RAM-ABE)}, from LWE.
2024
CRYPTO
Doubly Efficient Cryptography: Commitments, Arguments and RAM MPC
Abstract
Can a sender commit to a long input without even reading all of it? Can a prover convince a verifier that an NP statement holds without even reading the entire witness? Can a set of parties run a multiparty computation (MPC) protocol in the RAM model, without necessarily even reading their entire inputs? We show how to construct such ``doubly efficient'' schemes in a setting where parties can preprocess their input offline, but subsequently they can engage in many different protocol executions over this input in sublinear online time. We do so in the plain model, without any common setup. Our constructions rely on doubly efficient private information retrieval (DEPIR) as a building block and can be instantiated based on Ring LWE.
In more detail, we begin by constructing doubly efficient (interactive) commitments, where the sender preprocesses the input offline, and can later commit to this input to arbitrary receivers in sublinear online time. Moreover, the sender can open individual bits of the committed input in sublinear time. We then use these commitments to implement doubly succinct (interactive) arguments, where the prover preprocesses the statement/witness offline, and can subsequently run many proof protocols to convince arbitrary verifiers of the statement's validity in sublinear online time. Furthermore, we augment these to get a doubly efficient ``commit, prove and locally open'' protocol, where the prover can commit to a long preprocessed input, prove that it satisfies some global property, and locally open individual bits, all in sublinear time. Finally, we leverage these tools to construct a RAM-MPC with malicious security in the plain model. Each party individually preprocesses its input offline, and can then run arbitrary MPC executions over this input with arbitrary other parties. The online run-time of each MPC execution is only proportional to the RAM run-time of the underlying program, that can be sublinear in the input size.
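As a rough analogy for the offline/online split and the local openings described above (explicitly not the paper's DEPIR-based, hiding, plain-model constructions), a plain Merkle tree already exhibits the shape: linear offline preprocessing, a constant-size online commitment, and O(log n)-time local opens:

```python
# Rough analogy only: a Merkle tree gives the offline/online split the
# abstract describes. The paper's schemes are interactive, hiding, and
# built from DEPIR; this toy is neither hiding nor the paper's method.

import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def preprocess(bits: list[bytes]) -> list[list[bytes]]:
    """Offline: build all tree levels, leaves first (length a power of 2)."""
    levels = [[H(b) for b in bits]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def commit(levels) -> bytes:
    """Online: the commitment is just the root, constant size."""
    return levels[-1][0]

def open_bit(levels, i: int) -> list[bytes]:
    """Online: authentication path for position i, O(log n) work."""
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])   # sibling at this level
        i //= 2
    return path

levels = preprocess([b"0", b"1", b"1", b"0"])
root = commit(levels)
print(len(open_bit(levels, 2)))     # log2(4) = 2 hashes
```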
2024
CRYPTO
PIR with Client-Side Preprocessing: Information-Theoretic Constructions and Lower Bounds
Abstract
It is well-known that classical Private Information Retrieval (PIR) schemes without preprocessing must suffer from linear server computation per query, and moreover, any classical single-server PIR with sublinear bandwidth must rely on “public-key operations”. Several recent works showed that these barriers pertaining to classical PIR can be overcome by introducing a preprocessing phase where each client downloads a hint that helps it make queries subsequently. Notably, the Piano PIR scheme (and subsequent improvements) showed that with such a client-specific preprocessing, not only can we have PIR with sublinear computation and bandwidth per query, somewhat surprisingly, we can also get it using only symmetric-key operations (i.e., one-way functions).
In this paper, we take the question of minimizing cryptographic assumptions to an extreme. Specifically, we are the first to explore the landscape of information-theoretic single-server preprocessing PIR. We make contributions on both the upper- and lower-bound fronts. First, we show new information-theoretic constructions with non-trivial performance bounds. Second, we prove a (nearly) tight lower bound on the client-space and bandwidth tradeoff. Moreover, we also prove that natural approaches towards constructing preprocessing PIR with a better-than-Piano client-space/bandwidth tradeoff would imply a hard SZK problem which cannot be constructed in a black-box fashion from one-way functions or collision-resistant hashing. This shows that Piano achieves a (nearly) optimal client-space and bandwidth tradeoff subject to using only symmetric-key operations. The techniques for proving our new upper and lower bounds may also be of independent interest.
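To make the client-preprocessing paradigm concrete, here is a toy of the Piano-style hint mechanics; it deliberately omits the masking that real schemes need for privacy, so it demonstrates only correctness and the sublinear online work, not the security the paper studies:

```python
# Toy of hint-based preprocessing PIR mechanics (Piano-style): offline,
# the client streams the database and stores parities of random sets;
# online, a query touches only a small set. NOT private as written --
# real schemes mask which element of the hint set is being punctured.

import random

n = 16
db = [random.randrange(2) for _ in range(n)]

# Offline phase: the client keeps only (set, parity) hints, not db.
hints = []
for _ in range(64):
    s = set(random.sample(range(n), 4))
    hints.append((s, sum(db[j] for j in s) % 2))

def query(i: int) -> int:
    s, parity = next(h for h in hints if i in h[0])   # hint set containing i
    punctured = s - {i}                               # sent to the server
    answer = sum(db[j] for j in punctured) % 2        # server's reply
    return parity ^ answer                            # client recovers db[i]

assert all(query(i) == db[i] for i in range(n) if any(i in h[0] for h in hints))
```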
2024
ASIACRYPT
Interval Key-Encapsulation Mechanism
Abstract
Forward-Secure Key-Encapsulation Mechanism (FS-KEM; Canetti et al. Eurocrypt 2003) allows Alice to encapsulate a key $k$ to Bob for some time $t$ such that Bob can decapsulate it at any time $t' \leq t$. Crucially, a corruption of Bob's secret key after time $t$ does not reveal $k$.
In this work, we generalize and extend this idea by also taking Post-Compromise Security (PCS) into account and call it Interval Key-Encapsulation Mechanism (IKEM). Thus, we do not only protect confidentiality of previous keys against future corruptions but also confidentiality of future keys against past corruptions. For this, Bob can regularly renew his secret key and inform others about the corresponding public key. IKEM enables Bob to decapsulate keys sent to him over an interval of time extending into the past, in case senders have not obtained his latest public key; forward security only needs to hold with respect to keys encapsulated before this interval. This basic IKEM variant can be instantiated based on standard KEM, which we prove to be optimal in terms of assumptions as well as ciphertext and key sizes.
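An interface-level toy of the basic IKEM-from-any-KEM idea above, showing only the interval bookkeeping and the key deletion that gives forward security; the "KEM" is an insecure XOR stand-in in which the public key equals the secret key, purely for illustration:

```python
# Toy IKEM interface: Bob keeps the secret keys of the last `interval`
# epochs (size linear in the interval, matching the lower bound the
# abstract mentions) and deletes older ones. The "KEM" is a placeholder.

import os

class Bob:
    def __init__(self, interval: int):
        self.interval = interval
        self.sks = {}          # epoch -> secret key
        self.epoch = -1

    def renew(self) -> tuple[int, bytes]:
        self.epoch += 1
        sk = os.urandom(16)
        self.sks[self.epoch] = sk
        self.sks.pop(self.epoch - self.interval, None)  # forward security
        return self.epoch, sk   # stand-in: "public key" = sk (NOT a real KEM)

def encaps(pk_epoch: int, pk: bytes) -> tuple[int, bytes, bytes]:
    k = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(k, pk))            # toy encapsulation
    return pk_epoch, ct, k

def decaps(bob: Bob, t: int, ct: bytes):
    sk = bob.sks.get(t)         # fails if t precedes the current interval
    return None if sk is None else bytes(a ^ b for a, b in zip(ct, sk))

bob = Bob(interval=2)
t, pk = bob.renew()
t2, ct, k = encaps(t, pk)
assert decaps(bob, t2, ct) == k
```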
We also extend this notion of IKEM for settings in which Bob decapsulates (much) later than Alice encapsulates (e.g., in high-latency or segmented networks): if a third user Charlie forwards Alice's ciphertext to Bob and, additionally, knows a recently renewed public key of Bob's, Charlie could re-encrypt the ciphertext for better PCS. We call this extended notion IKEMR. Our first IKEMR construction based on trapdoor permutations has (almost) constant sized ciphertexts in the number of re-encryptions; and our second IKEMR construction based on FS-PKE has constant sized public keys in the interval size.
Finally, to bypass our lower bound on the IKEM(R) secret key size, which must be linear in the interval size, we develop a new Interval RAM primitive with which Bob only stores a constant sized part of his secret key locally, while outsourcing the rest to a (possibly adversarial) server.
For all our constructions, we achieve security against active adversaries. For this, we obtain new insights on Replayable CCA security for KEM-type primitives, which might be of independent interest.
2024
TCC
Adaptively Secure Attribute-Based Encryption from Witness Encryption
Abstract
Attribute-based encryption (ABE) enables fine-grained control over which ciphertexts various users can decrypt. A master authority can create secret keys $\mathsf{sk}_f$ with different functions (circuits) $f$ for different users. Anybody can encrypt a message under some attribute $x$ so that only recipients with a key $\mathsf{sk}_f$ for a function such that $f(x)=1$ will be able to decrypt. There are a number of different approaches toward achieving selectively secure ABE, where the adversary has to decide on the challenge attribute $x$ ahead of time before seeing any keys, including constructions via bilinear maps (for NC1 circuits), learning with errors, or witness encryption. However, when it comes to adaptively secure ABE, the problem seems to be much more challenging and we only know of two potential approaches: via the ``dual systems'' methodology from bilinear maps, or via indistinguishability obfuscation. In this work, we give a new approach that constructs adaptively secure ABE from witness encryption (along with statistically sound NIZKs and one-way functions). While witness encryption is a strong assumption, it appears to be fundamentally weaker than indistinguishability obfuscation. Moreover, we have candidate constructions of witness encryption from some assumptions (e.g., evasive LWE) from which we do not know how to construct indistinguishability obfuscation, giving us adaptive ABE from these assumptions as a corollary of our work.
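For concreteness, a toy model of the ABE functionality (a trusted party in code, with no cryptography); it only pins down the syntax and the f(x) = 1 decryption condition, and all names are illustrative:

```python
# Toy ABE *functionality*: decryption succeeds exactly when f(x) = 1.
# No security whatsoever -- this only fixes the interface.

from typing import Callable

class Authority:
    def keygen(self, f: Callable[[str], bool]):
        return f                          # sk_f "is" f in this toy

def encrypt(attribute: str, message: str):
    return (attribute, message)           # real ABE hides the message

def decrypt(sk_f, ct):
    x, message = ct
    if not sk_f(x):                       # decryption requires f(x) = 1
        raise PermissionError("f(x) = 0: not authorized")
    return message

auth = Authority()
sk = auth.keygen(lambda x: x.startswith("admin"))
print(decrypt(sk, encrypt("admin:alice", "hello")))
```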
2024
CIC
Efficient Maliciously Secure Oblivious Exponentiations
Abstract
Oblivious Pseudorandom Functions (OPRFs) allow a client to evaluate a pseudorandom function (PRF) on her secret input based on a key that is held by a server. In the process, the client only learns the PRF output but not the key, while the server neither learns the input nor the output of the client. The arguably most popular OPRF is due to Naor, Pinkas and Reingold (Eurocrypt 2009). It is based on an Oblivious Exponentiation by the server, with passive security under the Decisional Diffie-Hellman assumption. In this work, we strengthen the security guarantees of the NPR OPRF by protecting it against active attacks of the server. We have implemented our solution and report on the performance. Our main result is a new batch OPRF protocol which is secure against maliciously corrupted servers, but is essentially as efficient as the semi-honest solution. More precisely, the computation (and communication) overhead is a multiplicative factor $o(1)$ as the batch size increases. The obvious solution using zero-knowledge proofs would have a constant factor overhead at best, which can be too expensive for certain deployments. Our protocol relies on a novel version of the DDH problem, which we call the Oblivious Exponentiation Problem (OEP), and we give evidence for its hardness in the Generic Group model. We also present a variant of our maliciously secure protocol that does not rely on the OEP but nevertheless only has overhead $o(1)$ over the known semi-honest protocol. Moreover, we show that our techniques can also be used to efficiently protect threshold blind BLS signing and threshold ElGamal decryption against malicious attackers.
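For background, a sketch of the semi-honest blinded-exponentiation flow underlying this style of OPRF (the protocol the paper hardens against malicious servers). The toy group (Z_p^* for a Mersenne prime) and naive hash-to-group are illustrative stand-ins; none of the paper's malicious-security machinery appears here:

```python
# Semi-honest blinded-exponentiation OPRF flow, sketch only: the client
# blinds H(x) with a random exponent r, the server raises it to its key
# k, and the client unblinds. Toy parameters; real deployments use a
# standard prime-order elliptic-curve group.

import hashlib
from math import gcd
from secrets import randbelow

p = (1 << 127) - 1                      # Mersenne prime, toy group Z_p^*
order = p - 1

def hash_to_group(x: bytes) -> int:
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % (p - 1) + 1

k = randbelow(order - 2) + 2            # server's long-term OPRF key

def client_blind(x: bytes):
    while True:
        r = randbelow(order - 2) + 2
        if gcd(r, order) == 1:          # r must be invertible mod the order
            return r, pow(hash_to_group(x), r, p)

def server_eval(blinded: int) -> int:
    return pow(blinded, k, p)           # server only ever sees H(x)^r

def client_unblind(r: int, resp: int) -> int:
    return pow(resp, pow(r, -1, order), p)   # (H(x)^{rk})^{1/r} = H(x)^k

x = b"input"
r, a = client_blind(x)
assert client_unblind(r, server_eval(a)) == pow(hash_to_group(x), k, p)
```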
2023
PKC
A Map of Witness Maps: New Definitions and Connections
Abstract
A \emph{witness map} deterministically maps a witness $w$ of some NP statement $x$ into a computationally sound proof that $x$ is true, with respect to a public common reference string (CRS). In other words, it is a deterministic, non-interactive, computationally sound proof system in the CRS model. A \emph{unique witness map} (UWM) ensures that for any fixed statement $x$, the witness map should output the same \emph{unique} proof for $x$, no matter what witness $w$ it is applied to. More generally, a \emph{compact witness map} (CWM) can only output one of at most $2^\alpha$ proofs for any given statement $x$, where $\alpha$ is some compactness parameter. Such compact/unique witness maps were proposed recently by Chakraborty, Prabhakaran and Wichs (PKC '20) as a tool for building tamper-resilient signatures, who showed how to construct UWMs from indistinguishability obfuscation (iO). In this work, we study CWMs and UWMs as primitives of independent interest and present a number of interesting connections to various notions in cryptography.
\begin{itemize}
\item First, we show that UWMs lie somewhere between witness PRFs (Zhandry; TCC '16) and iO -- they imply the former and are implied by the latter. In particular, we show that a relaxation of UWMs to the ``designated verifier (dv-UWM)'' setting is \emph{equivalent} to witness PRFs. Moreover, we consider two flavors of such dv-UWMs, which correspond to two flavors of witness PRFs previously considered in the literature, and show that they are all in fact equivalent to each other in terms of feasibility.
\item Next, we consider CWMs that are extremely compact, with $\alpha = O(\log \lambda)$, where $\lambda$ is the security parameter. We show that such CWMs imply \emph{pseudo-UWMs} where the witness map is allowed to be \emph{pseudo-deterministic} -- i.e., for every true statement $x$, there is a unique proof such that, on any witness $w$, the witness map outputs this proof with $1-1/p(\lambda)$ probability, for a polynomial $p$ that we can set arbitrarily large.
\item Lastly, we consider CWMs that are mildly compact, with $\alpha = p(\lambda)$ for some a-priori fixed polynomial $p$, independent of the length of the statement $x$ or witness $w$. Such CWMs are implied by succinct non-interactive arguments (SNARGs). We show that such CWMs imply NIZKs, and therefore lie somewhere between NIZKs and SNARGs.
\end{itemize}
2023
EUROCRYPT
Speak Much, Remember Little: Cryptography in the Bounded Storage Model, Revisited
Abstract
The goal of the bounded storage model (BSM) is to construct unconditionally secure cryptographic protocols, by only restricting the storage capacity of the adversary, but otherwise giving it unbounded computational power. Here, we consider a streaming variant of the BSM, where honest parties can stream huge amounts of data to each other so as to overwhelm the adversary's storage, even while their own storage capacity is significantly smaller than that of the adversary. Prior works showed several impressive results in this model, including key agreement and oblivious transfer, but only as long as the adversary's storage $m = O(n^2)$ is at most quadratically larger than the honest user storage $n$. Moreover, the work of Dziembowski and Maurer (DM) also gave a seemingly matching lower bound, showing that key agreement in the BSM is impossible when $m > n^2$.
In this work, we observe that the DM lower bound only applies to a significantly more restricted version of the BSM, and does not apply to the streaming variant. Surprisingly, we show that it is possible to construct key agreement and oblivious transfer protocols in the streaming BSM, where the adversary's storage can be significantly larger, and even exponential $m = 2^{O(n)}$. The only price of accommodating larger values of $m$ is that the round and communication complexities of our protocols grow accordingly, and we provide lower bounds to show that an increase in rounds and communication is necessary.
As an added benefit of our work, we also show that our oblivious transfer (OT) protocol in the BSM satisfies a simulation-based notion of security. In contrast, even for the restricted case of $m = O(n^2)$, prior solutions only satisfied a weaker indistinguishability based definition. As an application of our OT protocol, we get general multiparty computation (MPC) in the BSM that allows for up to exponentially large gaps between $m$ and $n$, while also achieving simulation-based security.
2023
JOFC
Adaptively Secure MPC with Sublinear Communication Complexity
Abstract
A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results: (1) A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size. (2) Under the same assumptions, we construct a “Bob-optimized” 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice’s input size. We prove impossibility of “Alice-optimized” protocols. (3) Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP relation. On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS’18) and shed light on an interesting duality between LFE and FHE. Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold-PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting and show a two-round SFE protocol with guaranteed output delivery under the same constraints. Our results highlight that the asymptotic cost of adaptive security can be reduced to be comparable to, and in many settings almost match, that of static security, with only a little sacrifice to the concrete round complexity and asymptotic communication complexity.
2023
CRYPTO
Universal Amplification of KDM Security: From 1-Key Circular to Multi-Key KDM
Abstract
An encryption scheme is Key Dependent Message (KDM) secure if it is safe to encrypt messages that can arbitrarily depend on the secret keys themselves. In this work, we show how to upgrade essentially the weakest form of KDM security into the strongest one. In particular, we assume the existence of a symmetric-key bit-encryption that is circular-secure in the 1-key setting, meaning that it maintains security even if one can encrypt individual bits of a single secret key under itself. We also rely on a standard CPA-secure public-key encryption. We construct a public-key encryption scheme that is KDM secure for general functions (of a-priori bounded circuit size) in the multi-key setting, meaning that it maintains security even if one can encrypt arbitrary functions of arbitrarily many secret keys under each of the public keys. As a special case, the latter guarantees security in the presence of arbitrary length key cycles. Prior work already showed how to amplify n-key circular to n-key KDM security for general functions. Therefore, the main novelty of our work is to upgrade from 1-key to n-key security for arbitrary n.
As an independently interesting feature of our result, our construction does not need to know the actual specification of the underlying 1-key circular secure scheme, and we only rely on the existence of some such scheme in the proof of security. In particular, we present a universal construction of a multi-key KDM-secure encryption that is secure as long as some 1-key circular-secure scheme exists. While this feature is similar in spirit to Levin's universal construction of one-way functions, the way we achieve it is quite different technically, and does not come with the same galactic inefficiency.
2023
CRYPTO
The Pseudorandom Oracle Model and Ideal Obfuscation
Abstract
We introduce a new idealized model of hash functions, which we refer to as the *pseudorandom oracle* (PrO) model. Intuitively, it allows us to model cryptosystems that use the code of an ideal hash function in a non-black-box way. Formally, we model hash functions via a combination of a pseudorandom function (PRF) family and an ideal oracle. A user can initialize the hash function by choosing a PRF key $k$ and mapping it to a public handle $h$ using the oracle. Given the handle $h$ and some input $x$, the oracle can also be called to evaluate the PRF at $x$ with the corresponding key $k$. A user who chooses the PRF key $k$ therefore has a complete description of the hash function and can use its code in non-black-box constructions, while an adversary, who just gets the handle $h$, only has black-box access to the hash function via the oracle.
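The PrO model as described above is easy to render as an executable toy, with HMAC-SHA256 standing in for the PRF family (an assumption of this sketch, not the paper's choice):

```python
# Executable toy of the pseudorandom oracle (PrO) model: the oracle maps
# a PRF key k to a public handle h, and evaluates the PRF for handle
# holders. The key owner can use the PRF code non-black-box; an
# adversary holding only h gets black-box access via eval().

import hmac, hashlib, os

class PrOracle:
    def __init__(self):
        self._keys = {}                     # handle -> PRF key

    def init(self, k: bytes) -> bytes:
        h = os.urandom(16)                  # public handle for key k
        self._keys[h] = k
        return h

    def eval(self, h: bytes, x: bytes) -> bytes:
        return hmac.new(self._keys[h], x, hashlib.sha256).digest()

oracle = PrOracle()
k = os.urandom(32)                          # user knows k: non-black-box use
h = oracle.init(k)                          # adversary only ever sees h
assert oracle.eval(h, b"x") == hmac.new(k, b"x", hashlib.sha256).digest()
```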
As our main result, we show how to construct ideal obfuscation in the PrO model, starting from functional encryption (FE), which in turn can be based on well-studied polynomial hardness assumptions. In contrast, we know that ideal obfuscation cannot be instantiated in the basic random oracle model under any assumptions. We believe our result provides heuristic justification for the following: (1) most natural security goals implied by ideal obfuscation can be achieved in the real world; (2) obfuscation can be constructed from FE at polynomial security loss.
We also discuss how to interpret our result in the PrO model as a construction of ideal obfuscation using simple hardware tokens or as a way to bootstrap ideal obfuscation for PRFs to that for all functions.
2023
TCC
Lower Bounds on Anonymous Whistleblowing
Abstract
Anonymous transfer, recently introduced by Agrikola, Couteau and Maier [ACM22] (TCC '22), allows a sender to leak a message anonymously by participating in a public non-anonymous discussion where everyone knows who said what. This opens up the intriguing possibility of using cryptography to ensure strong anonymity guarantees in a seemingly non-anonymous environment.
The work of [ACM22] presented a lower bound on anonymous transfer, ruling out constructions with strong anonymity guarantees (where the adversary's advantage in identifying the sender is negligible) against arbitrary polynomial-time adversaries. They also provided a (heuristic) upper bound, giving a scheme with weak anonymity guarantees (the adversary's advantage in identifying the sender is inversely proportional to the number of rounds) against fine-grained adversaries whose run-time is bounded by some fixed polynomial that exceeds the run-time of the honest users. This leaves a large gap between the lower bound and the upper bound, raising the intriguing possibility that one may be able to achieve weak anonymity against arbitrary polynomial-time adversaries, or strong anonymity against fine-grained adversaries.
In this work, we present improved lower bounds on anonymous transfer, that rule out both of the above possibilities:
- We rule out the existence of anonymous transfer with any non-trivial anonymity guarantees against general polynomial time adversaries.
- Even if we restrict ourselves to fine-grained adversaries whose run-time is essentially equivalent to that of the honest parties, we cannot achieve strong anonymity, or even quantitatively improve over the inverse polynomial anonymity guarantees (heuristically) achieved by [ACM22].
Consequently, constructions of anonymous transfer can only provide security against fine-grained adversaries, and even in that case they achieve at most weak quantitative forms of anonymity.
2023
TCC
Security with Functional Re-Encryption from CPA
Abstract
The notion of functional re-encryption security (funcCPA) for public-key encryption schemes was recently introduced by Akavia et al. (TCC'22), in the context of homomorphic encryption.
This notion lies in between CPA security and CCA security: we give the attacker a *functional re-encryption oracle* instead of the decryption oracle of CCA security. This oracle takes a ciphertext ct and a function f, and returns a fresh encryption of f applied to the decryption of ct; in symbols, ct'=Enc(f(Dec(ct))).
In this work we observe that funcCPA security may have applications beyond homomorphic encryption, and set out to study its properties. As our main contribution, we prove that funcCPA is "closer to CPA than to CCA"; that is, funcCPA-secure encryption can be constructed in a black-box manner from CPA-secure encryption. We stress that, prior to our work, this was not known even for regular re-encryption queries corresponding to the identity function f.
At the core of our result is a new technique, showing how to handle *adaptive* functional re-encryption queries using tools previously developed in the context of non-malleable encryption, which roughly corresponds to a single *non-adaptive* parallel decryption query.
2023
TCC
Multi-Instance Randomness Extraction and Security against Bounded-Storage Mass Surveillance
Abstract
Consider a state-level adversary who observes and stores large amounts of encrypted data from all users on the Internet, but does not have the capacity to store it all. Later, it may target certain "persons of interest" in order to obtain their decryption keys. We would like to guarantee that, if the adversary's storage capacity is only (say) 1% of the total encrypted data size, then even if it can later obtain the decryption keys of arbitrary users, it can only learn something about the contents of (roughly) 1% of the ciphertexts, while the rest will maintain full security. This can be seen as an extension of incompressible cryptography (Dziembowski CRYPTO '06, Guan, Wichs and Zhandry EUROCRYPT '22) to the multi-user setting. We provide solutions in both the symmetric key and public key setting with various trade-offs in terms of computational assumptions and efficiency.
As the core technical tool, we study an information-theoretic problem which we refer to as "multi-instance randomness extraction". Suppose $X_1$, $\ldots$, $X_t$ are correlated random variables whose total joint min-entropy rate is $\alpha$, but we know nothing else about their individual entropies. We choose $t$ random and independent seeds $S_1,\ldots, S_t$ and attempt to individually extract some small amount of randomness $Y_i = Ext(X_i; S_i)$ from each $X_i$. We'd like to say that roughly an $\alpha$-fraction of the extracted outputs $Y_i$ should be indistinguishable from uniform even given all the remaining extracted outputs and all the seeds. We show that this indeed holds for specific extractors based on Hadamard and Reed-Muller codes.
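For concreteness, here are the mechanics of multi-instance extraction with the one-bit Hadamard (inner-product) extractor mentioned above; the paper's contribution is the analysis of this setting, not the sampling code itself:

```python
# Per-instance extraction as in the abstract: each X_i gets its own
# fresh, independent seed S_i and Y_i = <X_i, S_i> over GF(2) (the
# Hadamard extractor, one output bit per instance). The claim proved in
# the paper is that ~an alpha fraction of the Y_i stay uniform given
# all other outputs and all seeds.

import secrets

def hadamard_ext(x: int, s: int) -> int:
    """Inner product of the bit-vectors x and s over GF(2)."""
    return bin(x & s).count("1") % 2

n, t = 64, 8
xs = [secrets.randbits(n) for _ in range(t)]       # correlated in general
seeds = [secrets.randbits(n) for _ in range(t)]    # independent seeds
ys = [hadamard_ext(x, s) for x, s in zip(xs, seeds)]
print(ys)
```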
2022
EUROCRYPT
Incompressible Cryptography
Abstract
Incompressible encryption allows us to make the ciphertext size flexibly large and ensures that an adversary learns nothing about the encrypted data, even if the decryption key later leaks, unless she stores essentially the entire ciphertext. Incompressible signatures can be made arbitrarily large and ensure that an adversary cannot produce a signature on any message, even one she has seen signed before, unless she stores one of the signatures essentially in its entirety.
In this work, we give simple constructions of both incompressible public-key encryption and signatures under minimal assumptions. Furthermore, large incompressible ciphertexts (resp. signatures) can be decrypted (resp. verified) in a streaming manner with low storage. In particular, these notions strengthen the related concepts of disappearing encryption and signatures, recently introduced by Guan and Zhandry (TCC 2021), whose previous constructions relied on sophisticated techniques and strong, non-standard assumptions. We extend our constructions to achieve an optimal "rate", meaning the large ciphertexts (resp. signatures) can contain almost equally large messages, at the cost of stronger assumptions.
2022
EUROCRYPT
Authentication in the Bounded Storage Model
Abstract
We consider the streaming variant of the Bounded Storage Model (BSM), where the honest parties can stream large amounts of data to each other, while only maintaining a small memory of size $n$. The adversary also operates as a streaming algorithm, but has a much larger memory size $m \gg n$. The goal is to construct unconditionally secure cryptographic schemes in the BSM, and prior works did so for symmetric-key encryption, key agreement, oblivious transfer and multiparty computation. In this work, we construct message authentication and signatures in the BSM.
First, we consider the symmetric-key setting, where Alice and Bob share a small secret key. Alice can authenticate arbitrarily many messages to Bob by streaming long authentication tags of size $k \gg m$, while ensuring that the tags can be generated and verified using only $n$ bits of memory. We show a solution using local extractors (Vadhan; JoC '04), which allows for up to exponentially large adversarial memory $m = 2^{O(n)}$, and has tags of size $k= O(m)$.
Second, we consider the same setting as above, but now additionally require each individual tag to be small, of size $k \leq n$. We show that a solution is still possible when the adversary's memory is $m = O(n^2)$, which is optimal. Our solution relies on a space lower bound for learning parities (Raz; FOCS '16).
Third, we consider the public-key signature setting. A signer Alice initially streams a long verification key over an authentic channel, while only keeping a short signing key in her memory. A verifier Bob receives the streamed verification key and generates some short verification digest that he keeps in his memory. Later, Alice can sign arbitrarily many messages using her signing key by streaming the signatures to Bob, who can verify them using his verification digest. We give a solution for $m = O(n^2)$, which we show to be optimal. Our solution relies on a novel entropy lemma of independent interest: if a sequence of blocks has sufficiently high min-entropy, then a large fraction of individual blocks must also have high min-entropy. Naive versions of this lemma are false, but we show how to patch it to make it hold.
2022
CRYPTO
Nearly Optimal Property Preserving Hashing
Abstract
Property-preserving hashing (PPH) consists of a family of compressing hash functions $h$ such that, for any two inputs $x,y$, we can correctly identify whether some property $P(x,y)$ holds given only the digests $h(x),h(y)$. In a basic PPH, correctness should hold with overwhelming probability over the choice of $h$ when $x,y$ are worst-case values chosen a-priori and independently of $h$. In an adversarially robust PPH (RPPH), correctness must hold even when $x,y$ are chosen adversarially and adaptively depending on $h$. Here, we study (R)PPH for the property that the Hamming distance between $x$ and $y$ is at most $t$.
The notion of (R)PPH was introduced by Boyle, LaVigne and Vaikuntanathan (ITCS '19), and further studied by Fleischhacker, Simkin (Eurocrypt '21) and Fleischhacker, Larsen, Simkin (Eurocrypt '22). In this work, we obtain improved constructions that are conceptually simpler, have nearly optimal parameters, and rely on more general assumptions than prior works. Our results are:
* We construct information-theoretic non-robust PPH for Hamming distance via syndrome list-decoding of linear error-correcting codes (see the toy sketch after this list). We provide a lower bound showing that this construction is essentially optimal.
* We make the above construction robust with little additional overhead, by relying on homomorphic collision-resistant hash functions, which can be constructed from either the discrete-logarithm or the short-integer-solution assumptions. The resulting RPPH achieves improved compression compared to prior constructions, and is nearly optimal.
* We also show an alternate construction of RPPH for Hamming distance under the minimal assumption that standard collision-resistant hash functions exist. The compression is slightly worse than our optimized construction using homomorphic collision-resistance, but essentially matches the prior state of the art constructions from specific algebraic assumptions.
* Lastly, we study a new notion of randomized robust PPH (R2P2H) for Hamming distance, which relaxes RPPH by allowing the hashing algorithm itself to be randomized. We give an information-theoretic construction with optimal parameters.
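A toy of the syndrome idea from the first bullet, with brute-force decoding over tiny parameters; a real construction uses an efficiently list-decodable linear code, and the parameters and decoder here are illustrative:

```python
# Toy syndrome-based PPH for Hamming distance: hash x to its syndrome
# H*x over GF(2). By linearity, h(x) XOR h(y) = H*(x XOR y), so deciding
# whether dist(x, y) <= t reduces to decoding that syndrome to a
# low-weight error pattern. Brute-force decoding, small parameters.

import itertools, secrets

n, m, t = 16, 10, 2
H = [secrets.randbits(n) for _ in range(m)]         # random parity-check rows

def syndrome(x: int) -> int:
    return sum((bin(row & x).count("1") % 2) << i for i, row in enumerate(H))

def close(hx: int, hy: int) -> bool:
    """Is there an error pattern e of weight <= t with H*e = hx XOR hy?"""
    target = hx ^ hy
    for w in range(t + 1):
        for pos in itertools.combinations(range(n), w):
            e = sum(1 << p for p in pos)
            if syndrome(e) == target:
                return True
    return False

x = secrets.randbits(n)
y = x ^ (1 << 3) ^ (1 << 7)                         # distance exactly 2
assert close(syndrome(x), syndrome(y))
```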
2022
ASIACRYPT
Witness Encryption and Null-IO from Evasive LWE
Abstract
Witness encryption (WE) allows us to use an arbitrary NP statement $x$ as a public key to encrypt a message, and the witness $w$ serves as a decryption key. Security ensures that, when the statement $x$ is false, the encrypted message remains computationally hidden. WE appears to be significantly weaker than indistinguishability obfuscation (iO). Indeed, WE is closely related to a highly restricted form of iO that only guarantees security for null circuits (null iO). However, all current approaches towards constructing WE under nice assumptions go through iO. Such constructions are quite complex and are unlikely to lead to practically instantiable schemes.
In this work, we revisit a very simple WE and null iO candidate of Chen, Vaikuntanathan and Wee (CRYPTO 2018). We show how to prove its security under a nice and easy-to-state assumption that we refer to as {\em evasive LWE} following Wee (EUROCRYPT 2022). Roughly speaking, the evasive LWE assumption says the following: assume we have some joint distributions over matrices $\mathbf{P}$, $\mathbf{S}$ and auxiliary information $\mathsf{aux}$ such that
$$(\mathbf{S}\mathbf{B} + \mathbf{E},\ \mathbf{S}\mathbf{P} + \mathbf{E}',\ \mathsf{aux}) \approx_c (\mathbf{U},\ \mathbf{U}',\ \mathsf{aux}),$$
for a uniformly random (and secret) matrix $\mathbf{B}$, where $\mathbf{U}, \mathbf{U}'$ are uniformly random matrices, and $\mathbf{E},\mathbf{E}'$ are chosen from the LWE error distribution with appropriate parameters. Then it must also be the case that:
$$(\mathbf{S}\mathbf{B} + \mathbf{E},\ \mathbf{B}^{-1}(\mathbf{P}),\ \mathsf{aux}) \approx_c (\mathbf{U},\ \mathbf{B}^{-1}(\mathbf{P}),\ \mathsf{aux}).$$
Essentially, the above says that given $\mathbf{S}\mathbf{B} + \mathbf{E}$, getting the additional component $\mathbf{B}^{-1}(\mathbf{P})$ is no more useful than just getting the product $(\mathbf{S}\mathbf{B} + \mathbf{E})\cdot \mathbf{B}^{-1}(\mathbf{P}) \approx \mathbf{S}\mathbf{P} + \mathbf{E}'$.
2022
TCC
Post-Quantum Insecurity from LWE
Abstract
We show that for many fundamental cryptographic primitives, proving classical security under the learning-with-errors (LWE) assumption does \emph{not} imply post-quantum security. This is despite the fact that LWE is widely believed to be post-quantum secure, and our work does not give any evidence otherwise. Instead, it shows that post-quantum insecurity can arise inside cryptographic constructions, even if the assumptions are post-quantum secure.
Concretely, our work provides (contrived) constructions of pseudorandom functions, CPA-secure symmetric-key encryption, message-authentication codes, signatures, and CCA-secure public-key encryption schemes, all of which are proven to be classically secure under LWE via black-box reductions, but demonstrably fail to be post-quantum secure. All of these cryptosystems are stateless and non-interactive, but their security is defined via an interactive game that allows the attacker to make oracle queries to the cryptosystem. The polynomial-time quantum attacker can break these schemes by only making a few \emph{classical} queries to the cryptosystem, and in some cases, a single query suffices.
Previously, we only had examples of post-quantum insecurity under post-quantum assumptions for stateful/interactive protocols. Moreover, there appears to be a folklore belief that for stateless/non-interactive cryptosystems with black-box proofs of security, a quantum attack against the scheme should translate into a quantum attack on the assumption. This work shows otherwise. Our main technique is to carefully embed interactive protocols inside the interactive security games of the above primitives.
As a result of independent interest, we also show a 3-round \emph{quantum disclosure of secrets (QDS)} protocol between a classical sender and a receiver, where a quantum receiver learns a secret message in the third round but, assuming LWE, a classical receiver does not.
2021
TCC
Succinct LWE Sampling, Random Polynomials, and Obfuscation
Abstract
We present a construction of indistinguishability obfuscation (iO) that relies on the learning with errors (LWE) assumption together with a new notion of succinctly sampling pseudo-random LWE samples. We then present a candidate LWE sampler whose security is related to the hardness of solving systems of polynomial equations. Our construction improves on the recent iO candidate of Wee and Wichs (Eurocrypt 2021) in two ways: first, we show that a much weaker and simpler notion of LWE sampling suffices for iO; and secondly, our candidate LWE sampler is secure based on a compactly specified and falsifiable assumption about random polynomials, with a simple error distribution that facilitates cryptanalysis.
2021
EUROCRYPT
Candidate Obfuscation via Oblivious LWE Sampling
Abstract
We present a new, simple candidate construction of indistinguishability obfuscation (iO). Our scheme is inspired by lattices and learning-with-errors (LWE) techniques, but we are unable to prove security under a standard assumption. Instead, we formulate a new falsifiable assumption under which the scheme is secure. Furthermore, the scheme plausibly achieves post-quantum security.
Our construction is based on the recent ``split FHE'' framework of Brakerski, D\"ottling, Garg, and Malavolta (EUROCRYPT '20), and we provide a new instantiation of this framework. As a first step, we construct an iO scheme that is provably secure assuming that LWE holds and that it is possible to obliviously generate LWE samples without knowing the corresponding secrets. We define a precise notion of oblivious LWE sampling that suffices for the construction. It is known how to obliviously sample from any distribution (in a very strong sense) using iO, and our result provides a converse, showing that the ability to obliviously sample from the specific LWE distribution (in a much weaker sense) already implies iO. As a second step, we give a heuristic construction of oblivious LWE sampling. On a very high level, we do this by homomorphically generating pseudorandom LWE samples using an encrypted pseudorandom function.
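For reference, the target distribution being obliviously sampled: an LWE sample is $(\mathbf{a}, b = \langle \mathbf{a}, \mathbf{s}\rangle + e \bmod q)$ for a secret $\mathbf{s}$ and a small error $e$. This toy samples it in the clear, which is exactly what an oblivious sampler must avoid doing:

```python
# Plain (non-oblivious) LWE sampling, for reference only: the sampler
# below knows the secret s and the error e; the point of the paper is
# to generate samples from this distribution *without* any party
# knowing s or e. Toy parameters.

import secrets, random

q, n = 3329, 8                                   # toy modulus and dimension
s = [secrets.randbelow(q) for _ in range(n)]     # the secret

def lwe_sample():
    a = [secrets.randbelow(q) for _ in range(n)]
    e = round(random.gauss(0, 2))                # small Gaussian error
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

print(lwe_sample())
```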
2021
CRYPTO
Limits on the Adaptive Security of Yao’s Garbling
Abstract
Yao’s garbling scheme is one of the most fundamental cryptographic constructions. Lindell and Pinkas (Journal of Cryptology 2009) gave a formal proof of security in the selective setting assuming secure symmetric-key encryption (and hence one-way functions). This was followed by results, both positive and negative, concerning its security in the stronger, adaptive setting. Applebaum et al. (Crypto 2013) showed that it cannot satisfy adaptive security as is, due to a simple incompressibility argument. Jafargholi and Wichs (TCC 2017) considered a natural adaptation of Yao’s scheme that circumvents this negative result, and proved that it is adaptively secure, at least for shallow circuits. In particular, they showed that for the class of circuits of depth d, the loss in security is at most exponential in d. The above results all concern the simulation-based notion of security.
In this work, we show that the upper bound of Jafargholi and Wichs is more or less optimal in a strong sense. As our main result, we show that there exists a family of Boolean circuits, one for each depth d ∈ N, such that any black-box reduction proving the adaptive indistinguishability-security of the natural adaptation of Yao’s scheme from any symmetric-key encryption has to lose a factor that is sub-exponential in d. Since indistinguishability is a weaker notion than simulation, our bound also applies to adaptive simulation.
To establish our results, we build on the recent approach of Kamath et al. (Eprint 2021), which uses pebbling lower bounds in conjunction with oracle separations to prove fine-grained lower bounds on the loss in cryptographic security.
2021
CRYPTO
Targeted Lossy Functions and Applications
Abstract
Lossy trapdoor functions, introduced by Peikert and Waters (STOC '08), can be initialized in one of two indistinguishable modes: in injective mode, the function preserves all information about its input, and can be efficiently inverted given a trapdoor, while in lossy mode, the function loses some information about its input. Such functions have found countless applications in cryptography, and can be constructed from a variety of Cryptomania assumptions. In this work, we introduce \emph{targeted lossy functions (TLFs)}, which relax lossy trapdoor functions along two orthogonal dimensions. Firstly, they do not require an inversion trapdoor in injective mode. Secondly, the lossy mode of the function is initialized with some target input, and the function is only required to lose information about this particular target. The injective and lossy modes should be indistinguishable even given the target. We construct TLFs from Minicrypt assumptions, namely, injective pseudorandom generators, or even one-way functions under a natural relaxation of injectivity. We then generalize TLFs to incorporate \emph{branches}, and construct \emph{all-injective-but-one} and \emph{all-lossy-but-one} variants. We show a wide variety of applications of targeted lossy functions. In several cases, we get the first Minicrypt constructions of primitives that were previously only known under Cryptomania assumptions. Our applications include:
\begin{itemize}
\item \emph{Pseudo-entropy functions} from one-way functions.
\item Deterministic leakage-resilient message-authentication codes and improved leakage-resilient symmetric-key encryption from one-way functions.
\item Extractors for \emph{extractor-dependent sources} from one-way functions.
\item Selective-opening secure symmetric-key encryption from one-way functions.
\item A new construction of CCA PKE from (exponentially secure) trapdoor functions and injective pseudorandom generators.
\end{itemize}
We also discuss a fascinating connection to distributed point functions.
2021
TCC
Updatable Public Key Encryption in the Standard Model
Abstract
Forward security (FS) ensures that corrupting the current secret key in the system does not compromise the privacy or integrity of prior usages of the system. Achieving forward security is especially hard in the setting of public-key encryption (PKE), where time is divided into periods, and in each period the receiver derives the next-period secret key from their current secret key, while the public key stays constant. Indeed, all current constructions of FS-PKE are built from hierarchical identity-based encryption (HIBE) and are rather complicated.
Motivated by applications to secure messaging, recent works of Jost et al. (Eurocrypt’19) and Alwen et al. (CRYPTO’20) consider a natural relaxation of FS-PKE, which they term *updatable* PKE (UPKE). In this setting, the transition to the next period can be initiated by any sender, who can compute a special update ciphertext. This ciphertext directly produces the next-period public key and can be processed by the receiver to compute the next-period secret key. If done honestly, future (regular) ciphertexts produced with the new public key can be decrypted with the new secret key, but past such ciphertexts cannot be decrypted with the new secret key. Moreover, this is true even if all other previous-period updates were initiated by untrusted senders.
Both papers also constructed a very simple UPKE scheme based on the CDH assumption in the random oracle model. However, they left open the question of building such schemes in the standard model, or based on other (e.g., post-quantum) assumptions, without using the heavy HIBE techniques. In this work, we construct two efficient UPKE schemes in the standard model, based on the DDH and LWE assumptions, respectively. Somewhat interestingly, our constructions gain their efficiency (compared to prior FS-PKE schemes from the same assumptions) by using tools from the area of circular-secure and leakage resilient public-key encryption schemes (rather than HIBE).
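For context, the "very simple" update mechanic of the prior CDH/random-oracle UPKE pattern referenced above (not this paper's standard-model schemes): the sender samples a shift delta, sends it encrypted to the receiver, and both sides shift the key pair by delta. In this toy, delta appears in the clear, so it has no security and shows only the key-evolution step:

```python
# Toy of the UPKE key-evolution step: new pk = pk * g^delta, new
# sk = sk + delta. A real scheme sends delta encrypted under the
# current pk (e.g., hashed ElGamal); here it is in the clear. Toy group.

from secrets import randbelow

p = (1 << 127) - 1          # toy Mersenne prime; real schemes use proper groups
g = 5

sk = randbelow(p - 1)
pk = pow(g, sk, p)

def update(pk):
    delta = randbelow(p - 1)          # really sent encrypted under pk
    return delta, pk * pow(g, delta, p) % p

delta, pk_new = update(pk)            # sender side
sk_new = (sk + delta) % (p - 1)       # receiver side, after decrypting delta
assert pow(g, sk_new, p) == pk_new
```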
2021
JOFC
Is There an Oblivious RAM Lower Bound for Online Reads?
Abstract
Oblivious RAM (ORAM), introduced by Goldreich (STOC 1987) and Ostrovsky (STOC 1990), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an $O(\log n)$ overhead per access, where $n$ is the data size. The work of Goldreich and Ostrovsky (JACM 1996) gave a lower bound, showing that this is optimal for ORAM schemes that operate in a “balls and bins” model, where memory blocks can only be shuffled between different locations but not manipulated otherwise (and the server is used solely as remote storage). The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond “balls and bins”? The work of Boyle and Naor (ITCS 2016) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with $o(\log n)$ overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO 2018) shows that there indeed is an $\Omega(\log n)$ lower bound for general online ORAM. This still leaves the question open for online read-only ORAM or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM, in which the server is used solely as remote storage, where reads (but not writes) have an $o(\log n)$ overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
2020
EUROCRYPT
Two-Round Oblivious Transfer from CDH or LPN
Abstract
We show a new general approach for constructing maliciously-secure two-round oblivious transfer (OT). Specifically, we provide a generic sequence of transformations to upgrade a very basic notion of two-round OT, which we call elementary OT, to UC-secure OT. We then give simple constructions of elementary OT under the Computational Diffie-Hellman (CDH) assumption or the Learning Parity with Noise (LPN) assumption, yielding the first constructions of malicious (UC-secure) two-round OT under these assumptions. Since two-round OT is complete for two-round 2-party and multi-party computation in the malicious setting, we also achieve the first constructions of the latter under these assumptions.
2020
EUROCRYPT
Extracting Randomness from Extractor-Dependent Sources
Abstract
We revisit the well-studied problem of extracting nearly uniform randomness from an arbitrary source of sufficient min-entropy. Strong seeded extractors solve this problem by relying on a public random seed, which is unknown to the source. Here, we consider a setting where the seed is reused over time and the source may depend on prior calls to the extractor with the same seed. Can we still extract nearly uniform randomness?
In more detail, we assume the seed is chosen randomly, but the source can make arbitrary oracle queries to the extractor with the given seed before outputting a sample. We require that the sample has entropy and differs from any of the previously queried values. The extracted output should look uniform even to a distinguisher that gets the seed. We consider two variants of the problem, depending on whether the source only outputs the sample, or whether it can also output some correlated public auxiliary information that preserves the sample's entropy. Our results are:
* Without Auxiliary Information: We show that every pseudo-random function (PRF) with a sufficiently high security level is a good extractor in this setting (see the sketch below), even if the distinguisher is computationally unbounded. We further show that the source necessarily needs to be computationally bounded and that such extractors imply one-way functions.
* With Auxiliary Information: We construct secure extractors in this setting, as long as both the source and the distinguisher are computationally bounded. We give several constructions based on different intermediate primitives, yielding instantiations based on the DDH, DLIN, LWE or DCR assumptions. On the negative side, we show that one cannot prove security against computationally unbounded distinguishers in this setting under any standard assumption via a black-box reduction. Furthermore, even when restricting to computationally bounded distinguishers, we show that there exist PRFs that are insecure as extractors in this setting and that a large class of constructions cannot be proven secure via a black-box reduction from standard assumptions.
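A minimal sketch of the first bullet's positive result, with HMAC-SHA256 standing in for a PRF with a sufficiently high security level (an assumption of this sketch, not the paper's instantiation):

```python
# PRF-as-extractor, per the first bullet above: the extractor seed is
# used as the PRF key and Ext(x; seed) = PRF(seed, x). The source may
# query Ext(.; seed) adaptively, then must output a fresh sample x with
# entropy; the claim is that Ext(x; seed) still looks uniform.

import hmac, hashlib, os

def ext(x: bytes, seed: bytes) -> bytes:
    return hmac.new(seed, x, hashlib.sha256).digest()

seed = os.urandom(32)
print(ext(b"sample with entropy", seed).hex())
```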
2020
EUROCRYPT
Statistical ZAPR Arguments from Bilinear Maps
Abstract
Dwork and Naor (FOCS '00) defined ZAPs as 2-message witness-indistinguishable proofs that are public-coin. We relax this to \emph{ZAPs with private Randomness} (ZAPRs), where the verifier can use private coins to sample the first message (independently of the statement being proved), but the proof must remain publicly verifiable given only the protocol transcript. In particular, ZAPRs are \emph{reusable}, meaning that the first message can be reused for multiple proofs without compromising security.
Known constructions of ZAPs from trapdoor permutations or bilinear maps are only computationally WI (and statistically sound). Two recent results of Badrinarayanan-Fernando-Jain-Khurana-Sahai and Goyal-Jain-Jin-Malavolta [EUROCRYPT '20] construct the first \emph{statistical ZAP arguments}, which are statistically WI (and computationally sound), from the quasi-polynomial LWE assumption. Here, we construct \emph{statistical ZAPR arguments} from the quasi-polynomial decision-linear (DLIN) assumption on groups with a bilinear map. Our construction relies on a combination of several tools including Groth-Ostrovsky-Sahai NIZK and NIWI [EUROCRYPT '06, CRYPTO '06, JACM '12], ``sometimes-binding statistically hiding commitments'' [Kalai-Khurana-Sahai, EUROCRYPT '18] and the ``MPC-in-the-head'' technique [Ishai-Kushilevitz-Ostrovsky-Sahai, STOC '07].
2020
PKC
Witness Maps and Applications
Abstract
We introduce the notion of Witness Maps as a cryptographic notion of a proof system. A Unique Witness Map (UWM) deterministically maps all witnesses for an $\mathbf{NP}$ statement to a single representative witness, resulting in a computationally sound, deterministic-prover, non-interactive witness independent proof system. A relaxation of UWM, called Compact Witness Map (CWM), maps all the witnesses to a small number of witnesses, resulting in a “lossy” deterministic-prover, non-interactive proof system. We also define a Dual Mode Witness Map (DMWM) which adds an “extractable” mode to a CWM. Our main construction is a DMWM for all $\mathbf{NP}$ relations, assuming sub-exponentially secure indistinguishability obfuscation ($i\mathcal{O}$), along with standard cryptographic assumptions. The DMWM construction relies on a CWM and a new primitive called Cumulative All-Lossy-But-One Trapdoor Functions (C-ALBO-TDF), both of which are in turn instantiated based on $i\mathcal{O}$ and other primitives. Our instantiation of a CWM is in fact a UWM; in turn, we show that a UWM implies Witness Encryption. Along the way to constructing UWM and C-ALBO-TDF, we also construct, from standard assumptions, Puncturable Digital Signatures and a new primitive called Cumulative Lossy Trapdoor Functions (C-LTDF). The former improves upon a construction of Bellare et al. (Eurocrypt 2016), who relied on sub-exponentially secure $i\mathcal{O}$ and sub-exponentially secure OWF. As an application of our constructions, we show how to use a DMWM to construct the first leakage and tamper-resilient signatures with a deterministic signer, thereby solving a decade-old open problem posed by Katz and Vaikuntanathan (Asiacrypt 2009), by Boyle, Segev and Wichs (Eurocrypt 2011), as well as by Faonio and Venturi (Asiacrypt 2016). Our construction achieves the optimal leakage rate of $1 - o(1)$.
2020
CRYPTO
Leakage-Resilient Key Exchange and Two-Seed Extractors
Abstract
Can Alice and Bob agree on a uniformly random secret key without having any truly secret randomness to begin with? Here we consider a setting where Eve can get partial leakage on the internal state of both Alice and Bob individually before the protocol starts. They then run a protocol using their states without any additional randomness and need to agree on a shared key that looks uniform to Eve, even after observing the leakage and the protocol transcript. We focus on non-interactive (one round) key exchange (NIKE), where Alice and Bob send one message each without waiting for one another.
We first consider this problem in the symmetric-key setting, where the states of Alice and Bob include a shared secret as well as individual uniform randomness. However, since Eve gets leakage on these states, Alice and Bob need to perform privacy amplification to derive a fresh secret key from them. Prior solutions require Alice and Bob to sample fresh uniform randomness during the protocol, while in our setting all of their randomness was already part of their individual states a priori and was therefore subject to leakage. We show an information-theoretic solution to this problem using a novel primitive that we call a two-seed extractor, which we in turn construct by drawing a connection to communication-complexity lower-bounds in the number-on-forehead (NOF) model.
We then turn to studying this problem in the public-key setting, where the states of Alice and Bob consist of independent uniform randomness. Unfortunately, we give a black-box separation showing that leakage-resilient NIKE in this setting cannot be proven secure via a black-box reduction under any game-based assumption when the leakage is super-logarithmic. This includes virtually all assumptions used in cryptography, and even very strong assumptions such as indistinguishability obfuscation (iO). Nevertheless, we also provide positive results that get around the above separation:
- We show that every key exchange protocol (e.g., Diffie-Hellman) is secure when the leakage amount is logarithmic, or potentially even greater if we assume sub-exponential security without leakage.
- We notice that the black-box separation does not extend to schemes in the common reference string (CRS) model, or to schemes with preprocessing, where Alice and Bob can individually pre-process their random coins to derive their secret state prior to leakage. We give a solution in the CRS model with preprocessing using bilinear maps. We also give solutions in just the CRS model alone (without preprocessing) or just with preprocessing (without a CRS), using iO and lossy functions.
2020
CRYPTO
Incompressible Encodings
Abstract
An incompressible encoding can probabilistically encode some data $m$ into a codeword $c$, which is not much larger. Anyone can decode $c$ to recover the original data $m$. However, $c$ cannot be efficiently compressed, even if the original data $m$ is given to the decompression procedure for free. In other words, $c$ is a representation of $m$, yet is computationally incompressible even given $m$. An incompressible encoding is composable if many encodings cannot be simultaneously compressed into anything sufficiently smaller than their concatenation.
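To pin down the syntax and the compression game just described, here is a minimal Python sketch (placeholder algorithms and our own names; the paper's constructions are what make the game hard to win):

```python
import secrets

# Sketch of the incompressible-encoding syntax and its security game, as
# defined above. encode/decode are placeholders; the paper instantiates
# them under DCR or LWE (in the CRS model) or in the random-oracle model.

def encode(m: bytes) -> bytes:
    # placeholder: real encodings are randomized and not much longer than m
    return m + secrets.token_bytes(16)

def decode(c: bytes) -> bytes:
    return c[:-16]

def adversary_wins(compress, decompress, m: bytes, ell: int) -> bool:
    """Security game: the adversary compresses c to at most ell bytes, and a
    decompressor that gets the original data m for free must reproduce c.
    Incompressibility says no efficient pair wins noticeably for ell << |c|."""
    c = encode(m)
    s = compress(c, m)
    return len(s) <= ell and decompress(s, m) == c

m = b"some data"
assert decode(encode(m)) == m  # correctness: anyone can decode
```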
A recent work of Damgard, Ganesh and Orlandi (CRYPTO '19) defined a variant of incompressible encodings and gave an application to ``proofs of replicated storage''. They constructed incompressible encodings in an ideal permutation model over a structured domain, but it was left open whether they can be constructed under standard assumptions, or even in the more basic random-oracle model. In this work, we give new constructions, negative results and applications of incompressible encodings:
* We construct incompressible encodings in the common random string (CRS) model under the Decisional Composite Residuosity (DCR) or Learning with Errors (LWE) assumptions. However, the construction has several drawbacks: (1) it is not composable, (2) it only achieves selective security, and (3) the CRS is as long as the data $m$.
* We leverage the above construction to also get a scheme in the random-oracle model, under the same assumptions, that avoids all of the above drawbacks. Furthermore, it is significantly more efficient than the prior ideal-model construction.
* We give black-box separations, showing that incompressible encodings in the plain model cannot be proven secure under any standard hardness assumption, and incompressible encodings in the CRS model must inherently suffer from all of the drawbacks above.
* We give a new application to ``big-key cryptography in the bounded-retrieval model'', where secret keys are made intentionally huge to make them hard to exfiltrate. Using incompressible encodings, we can get all the security benefits of a big key without wasting storage space, by having the key encode useful data.
2020
TCC
Optimal Broadcast Encryption from LWE and Pairings in the Standard Model
Abstract
Broadcast Encryption with optimal parameters was a long-standing problem, whose first solution was provided in an elegant work by Boneh, Waters and Zhandry [BWZ14]. However, this work relied on multilinear maps of logarithmic degree, which is not considered a standard assumption. Recently, Agrawal and Yamada [AY20] improved this state of affairs by providing the first construction of optimal broadcast encryption from Bilinear Maps and Learning With Errors (LWE). However, their proof of security was in the generic bilinear group model. In this work, we improve upon their result by providing a new construction and proof in the standard model. In more detail, we rely on the Learning With Errors (LWE) assumption and the Knowledge of OrthogonALity Assumption (KOALA) [BW19] on bilinear groups.
Our construction combines three building blocks: a (computational) nearly linear secret sharing scheme with compact shares, which we construct from LWE; an inner-product functional encryption scheme with special properties, which is constructed from the bilinear Matrix Decision Diffie-Hellman (MDDH) assumption; and a certain form of hyperplane obfuscation, which is constructed using the KOALA assumption. While our construction is similar to that of Agrawal and Yamada, it provides a new understanding of how to decompose the construction into simpler, modular building blocks with concrete and easy-to-understand security requirements for each one. We believe this sheds new light on the requirements for optimal broadcast encryption, which may lead to new constructions in the future.
2019
EUROCRYPT
Private Anonymous Data Access
Abstract
We consider a scenario where a server holds a huge database that it wants to make accessible to a large group of clients. After an initial setup phase, clients should be able to read arbitrary locations in the database while maintaining privacy (the server does not learn which locations are being read) and anonymity (the server does not learn which client is performing each read). This should hold even if the server colludes with a subset of the clients. Moreover, the run-time of both the server and the client during each read operation should be low, ideally only poly-logarithmic in the size of the database and the number of clients. We call this notion Private Anonymous Data Access (PANDA). PANDA simultaneously combines aspects of Private Information Retrieval (PIR) and Oblivious RAM (ORAM). PIR has no initial setup, and allows anybody to privately and anonymously access a public database, but the server’s run-time is linear in the data size. On the other hand, ORAM achieves poly-logarithmic server run-time, but requires an initial setup after which only a single client with a secret key can access the database. The goal of PANDA is to get the best of both worlds: allow many clients to privately and anonymously access the database as in PIR, while having an efficient server as in ORAM. In this work, we construct bounded-collusion PANDA schemes, where the efficiency scales linearly with a bound on the number of corrupted clients that can collude with the server, but is otherwise poly-logarithmic in the data size and the total number of clients. Our solution relies on standard assumptions, namely the existence of fully homomorphic encryption, and combines techniques from both PIR and ORAM. We also extend PANDA to settings where clients can write to the database.
2019
EUROCRYPT
Reusable Designated-Verifier NIZKs for all NP from CDH
Abstract
Non-interactive zero-knowledge proofs (NIZKs) are a fundamental cryptographic primitive. Despite a long history of research, we only know how to construct NIZKs under a few select assumptions, such as the hardness of factoring or using bilinear maps. Notably, there are no known constructions based on either the computational or decisional Diffie-Hellman (CDH/DDH) assumption without relying on a bilinear map. In this paper, we study a relaxation of NIZKs in the designated verifier setting (DV-NIZK), in which the public common-reference string is generated together with a secret key that is given to the verifier in order to verify proofs. In this setting, we distinguish between one-time and reusable schemes, depending on whether they can be used to prove only a single statement or arbitrarily many statements. For reusable schemes, the main difficulty is to ensure that soundness continues to hold even when the malicious prover learns whether various proofs are accepted or rejected by the verifier. One-time DV-NIZKs are known to exist for general NP statements assuming only public-key encryption. However, prior to this work, we did not have any construction of reusable DV-NIZKs for general NP statements from any assumption under which we did not already have standard NIZKs. In this work, we construct reusable DV-NIZKs for general NP statements under the CDH assumption, without requiring a bilinear map. Our construction is based on the hidden-bits paradigm, which was previously used to construct standard NIZKs. We define a cryptographic primitive called a hidden-bits generator (HBG), along with a designated-verifier variant (DV-HBG), which modularly abstract out how to use this paradigm to get both standard NIZKs and reusable DV-NIZKs. We construct a DV-HBG scheme under the CDH assumption by relying on techniques from the Cramer-Shoup hash-proof system, and this yields our reusable DV-NIZK for general NP statements under CDH. We also consider a strengthening of DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK) where the setup consists of an honestly generated common random string and the verifier then gets to choose his own (potentially malicious) public/secret key pair to generate/verify proofs. We construct MDV-NIZKs under the “one-more CDH” assumption without relying on bilinear maps.
2019
EUROCRYPT
Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing
Abstract
We present a worst-case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst-case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst-case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires extremely high noise rate of $1/2 - 1/\mathrm{poly}(n)$. Thus we can only show that “very hard” LPN is harder than some “very mildly hard” worst-case problem. We note that LPN with noise $1/2 - 1/\mathrm{poly}(n)$ already implies symmetric cryptography. Specifically, we consider the $(n, m, w)$-nearest codeword problem ($(n, m, w)$-NCP), which takes as input a generating matrix for a binary linear code in $m$ dimensions and rank $n$, and a target vector which is very close to the code (Hamming distance at most $w$), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx \log^2 n / n$, $(n, m, w)$-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2 - 1/\mathrm{poly}(n)$. Our proof relies on a smoothing lemma for codes which we show to have further implications: we show that $(n, m, w)$-NCP with the aforementioned parameters lies in the complexity class $\mathrm{Search}\text{-}\mathcal{BPP}^{\mathcal{SZK}}$ (i.e., it is reducible to a problem that has a statistical zero-knowledge protocol), implying that it is unlikely to be $\mathcal{NP}$-hard. We then show that the hardness of LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision-resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $\mathcal{BPP}^{\mathcal{SZK}}$).
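In display form, the worst-case problem above reads as follows (the shape convention for the generating matrix is ours):

```latex
% (n,m,w)-NCP, restating the prose definition above; \Delta = Hamming distance.
\textbf{Input:}\; G \in \mathbb{F}_2^{\,m \times n} \text{ of rank } n, \quad
  t \in \mathbb{F}_2^{\,m} \text{ such that } \min_{x \in \mathbb{F}_2^{\,n}} \Delta(Gx,\, t) \le w.
\qquad
\textbf{Output:}\; \text{the codeword } Gx^{\ast} \text{ minimizing } \Delta(Gx^{\ast},\, t).
```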
2019
CRYPTO
Non-malleable Codes for Decision Trees
Abstract
We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by decision trees of depth $d = n^{1/4 - o(1)}$. In particular, each bit of the tampered codeword is set arbitrarily after adaptively reading up to $d$ arbitrary locations within the original codeword. Prior to this work, no efficient unconditional non-malleable codes were known for decision trees beyond depth $O(\log^2 n)$. Our result also yields efficient, unconditional non-malleable codes that are $\exp(-n^{\Omega(1)})$-secure against constant-depth circuits of $\exp(n^{\Omega(1)})$-size. Prior work of Chattopadhyay and Li (STOC 2017) and Ball et al. (FOCS 2018) only provides protection against $\exp(O(\log^2 n))$-size circuits with $\exp(-O(\log^2 n))$-security. We achieve our result through simple non-malleable reductions of decision tree tampering to split-state tampering. As an intermediary, we give a simple and generic reduction of leakage-resilient split-state tampering to split-state tampering with improved parameters. Prior work of Aggarwal et al. (TCC 2015) only provides a reduction to split-state non-malleable codes with decoders that exhibit particular properties.
2019
CRYPTO
On the Plausibility of Fully Homomorphic Encryption for RAMs
Abstract
We initiate the study of fully homomorphic encryption for RAMs (RAM-FHE). This is a public-key encryption scheme where, given an encryption of a large database $D$, anybody can efficiently compute an encryption of $P(D)$ for an arbitrary RAM program $P$. The running time over the encrypted data should be as close as possible to the worst-case running time of $P$, which may be sub-linear in the data size. A central difficulty in constructing a RAM-FHE scheme is hiding the sequence of memory addresses accessed by $P$. This is particularly problematic because an adversary may homomorphically evaluate many programs over the same ciphertext, therefore effectively “rewinding” any mechanism for making memory accesses oblivious. We identify a necessary prerequisite towards constructing RAM-FHE that we call rewindable oblivious RAM (rewindable ORAM), which provides security even in this strong adversarial setting. We show how to construct rewindable ORAM using symmetric-key doubly efficient PIR (SK-DEPIR) (Canetti-Holmgren-Richelson, Boyle-Ishai-Pass-Wootters: TCC ’17). We then show how to use rewindable ORAM, along with virtual black-box (VBB) obfuscation for specific circuits, to construct RAM-FHE. The latter primitive can be heuristically instantiated using existing indistinguishability obfuscation candidates. Overall, we obtain a RAM-FHE scheme where the multiplicative overhead in running time is polylogarithmic in the database size $N$. Our basic scheme is single-hop, but we also extend it to obtain multi-hop RAM-FHE with overhead $N^\epsilon$ for arbitrarily small $\epsilon > 0$. We view our work as the first evidence that RAM-FHE is likely to exist.
2019
CRYPTO
Adaptively Secure MPC with Sublinear Communication Complexity
Abstract
A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results:
- A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size.
- Under the same assumptions, we construct a “Bob-optimized” 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice’s input size. We prove impossibility of “Alice-optimized” protocols.
- Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP-relation.

On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS’18) and shed light on an interesting duality between LFE and FHE. Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting, and show a two-round SFE protocol with guaranteed output delivery under the same constraints.
2019
CRYPTO
New Constructions of Reusable Designated-Verifier NIZKs
Abstract
Non-interactive zero-knowledge arguments (NIZKs) for $\mathsf{NP}$ are an important cryptographic primitive, but we currently only have instantiations under a few specific assumptions. Notably, we are missing constructions from the learning with errors (LWE) assumption, the Diffie-Hellman (CDH/DDH) assumption, and the learning parity with noise (LPN) assumption. In this paper, we study a relaxation of NIZKs to the designated-verifier setting (DV-NIZK), where a trusted setup generates a common reference string together with a secret key for the verifier. We want reusable schemes, which allow the verifier to reuse the secret key to verify many different proofs, and soundness should hold even if the malicious prover learns whether various proofs are accepted or rejected. Such reusable DV-NIZKs were recently constructed under the CDH assumption, but it was open whether they can also be constructed under LWE or LPN. We also consider an extension of reusable DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK). In this setting, the only trusted setup consists of a common random string. However, there is also an additional untrusted setup in which the verifier chooses a public/secret key needed to generate/verify proofs, respectively. We require that zero-knowledge holds even if the public key is chosen maliciously by the verifier. Such reusable MDV-NIZKs were recently constructed under the “one-more CDH” assumption, but constructions under CDH/LWE/LPN remained open. In this work, we give new constructions of (reusable) DV-NIZKs and MDV-NIZKs using generic primitives that can be instantiated under CDH, LWE, or LPN.
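The MDV-NIZK syntax described above can be summarized by the following interface sketch (method names and types are ours, not the paper's formalism):

```python
from abc import ABC, abstractmethod

# Interface sketch of a reusable *malicious* designated-verifier NIZK
# (MDV-NIZK), following the description above: a trusted common random
# string, an untrusted verifier-chosen key pair, and reusable
# proving/verification.
class MDVNIZK(ABC):
    @abstractmethod
    def setup(self) -> bytes:
        """Trusted setup: output a common random string (crs)."""

    @abstractmethod
    def verifier_keygen(self, crs: bytes) -> tuple[bytes, bytes]:
        """Untrusted: the verifier picks (pk, sk); zero-knowledge must
        hold even for a maliciously chosen pk."""

    @abstractmethod
    def prove(self, crs: bytes, pk: bytes, x: bytes, w: bytes) -> bytes:
        """Prove that statement x holds, using witness w."""

    @abstractmethod
    def verify(self, crs: bytes, sk: bytes, x: bytes, proof: bytes) -> bool:
        """Reusable: soundness must survive the prover learning many
        accept/reject decisions under the same sk."""
```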
2019
CRYPTO
Broadcast and Trace with $N^{\varepsilon}$ Ciphertext Size from Standard Assumptions
Abstract
We construct a broadcast and trace scheme (also known as trace and revoke or broadcast, trace and revoke) with $N$ users, where the ciphertext size can be made as low as $O(N^\varepsilon)$, for any arbitrarily small constant $\varepsilon > 0$. This improves on the prior best construction of broadcast and trace under standard assumptions by Boneh and Waters (CCS ‘06), which had ciphertext size $O(N^{1/2})$. While that construction relied on bilinear maps, ours uses a combination of the learning with errors (LWE) assumption and bilinear maps. Recall that, in both broadcast encryption and traitor-tracing schemes, there is a collection of $N$ users, each of which gets a different secret key $\mathsf{sk}_i$. In broadcast encryption, it is possible to create ciphertexts targeted to a subset $S \subseteq [N]$ of the users such that only those users can decrypt it correctly. In a traitor tracing scheme, if a subset of users gets together and creates a decoder box $D$ that is capable of decrypting ciphertexts, then it is possible to trace at least one of the users responsible for creating $D$. A broadcast and trace scheme intertwines the two properties, in a way that results in more than just their union. In particular, it ensures that if a decoder $D$ is able to decrypt ciphertexts targeted toward a set $S$ of users, then it should be possible to trace one of the users in the set $S$ responsible for creating $D$, even if other users outside of $S$ also participated. As of recently, we have essentially optimal broadcast encryption (Boneh, Gentry, Waters CRYPTO ’05) under bilinear maps and traitor tracing (Goyal, Koppula, Waters STOC ’18) under LWE, where the ciphertext size is at most poly-logarithmic in $N$. The main contribution of our paper is to carefully combine LWE and bilinear-map based components, and get them to interact with each other, to achieve broadcast and trace.
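To pin down the syntax described above, here is a minimal Python interface sketch (method names and types are ours, not the paper's):

```python
from abc import ABC, abstractmethod
from typing import Callable

# Interface sketch of a broadcast and trace scheme for N users, following
# the description above.
class BroadcastAndTrace(ABC):
    @abstractmethod
    def setup(self, n: int) -> tuple[bytes, list[bytes]]:
        """Return a public key and per-user secret keys sk_1..sk_N."""

    @abstractmethod
    def enc(self, pk: bytes, s: set[int], m: bytes) -> bytes:
        """Encrypt m so that exactly the users in S ⊆ [N] can decrypt;
        the ciphertext should have size roughly O(N^eps)."""

    @abstractmethod
    def dec(self, sk_i: bytes, ct: bytes) -> bytes:
        """Decrypt, succeeding only if the key holder is in the target set."""

    @abstractmethod
    def trace(self, pk: bytes, s: set[int],
              decoder: Callable[[bytes], bytes]) -> int:
        """Given (oracle access to) a decoder D that decrypts ciphertexts
        targeted to S, output some traitor i ∈ S who helped build D."""
```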
2019
JOFC
From Cryptomania to Obfustopia Through Secret-Key Functional Encryption
Abstract
Functional encryption lies at the frontier of current research in cryptography; some variants have been shown sufficiently powerful to yield indistinguishability obfuscation (IO), while other variants have been constructed from standard assumptions such as LWE. Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of secret-key functional encryption with an unbounded number of keys and ciphertexts. On the one hand, this primitive is not known to imply anything outside of minicrypt, the land of secret-key cryptography, but, on the other hand, we do not know how to construct it without the heavy hammers of obfustopia. In this work, we show that (subexponentially secure) secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of (subexponentially secure) plain public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia. On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get “exponentially efficient indistinguishability obfuscation” (XIO), a notion recently introduced by Lin et al. (PKC’16) as a relaxation of IO. Lin et al. show how to use XIO and the LWE assumption to build IO. As our second contribution, we improve on this result by replacing its reliance on the LWE assumption with any plain public-key encryption scheme. Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS’15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption. We show, through a non-black-box construction, that subexponentially secure secret-key functional encryption indeed leads to public-key encryption. The resulting public-key encryption scheme, however, is at most quasi-polynomially secure, which is insufficient to take us to obfustopia.
2018
CRYPTO
Hardness of Non-interactive Differential Privacy from One-Way Functions
Abstract
A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset $D \in X^n$ consisting of some small number of elements $n$ from some large data universe $X$, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family $Q$. Ignoring computational constraints, this problem can be solved even when $X$ and $Q$ are exponentially large and $n$ is just a small polynomial; however, all algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation, no efficient differentially private algorithm exists when $X$ and $Q$ can be exponentially large. However, there are no strong separations between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption. In this work we show that, if one-way functions exist, there is no general purpose differentially private algorithm that works when $X$ and $Q$ are exponentially large, and $n$ is an arbitrary polynomial. In fact, we show that this result holds even if $X$ is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey [52].
2018
PKC
Multi-Key Searchable Encryption, Revisited
Abstract
We consider a setting where users store their encrypted documents on a remote server and can selectively share documents with each other. A user should be able to perform keyword searches over all the documents she has access to, including the ones that others shared with her. The contents of the documents, and the search queries, should remain private from the server. This setting was considered by Popa et al. (NSDI ’14) who developed a new cryptographic primitive called Multi-Key Searchable Encryption (MKSE), together with an instantiation and an implementation within a system called Mylar, to address this goal. Unfortunately, Grubbs et al. (CCS ’16) showed that the proposed MKSE definition fails to provide basic security guarantees, and that the Mylar system is susceptible to simple attacks. Most notably, if a malicious Alice colludes with the server and shares a document with an honest Bob then the privacy of all of Bob’s search queries is lost. In this work we revisit the notion of MKSE and propose a new strengthened definition that rules out the above attacks. We then construct MKSE schemes meeting our definition. We first give a simple and efficient construction using only pseudorandom functions. This construction achieves our strong security definition at the cost of increasing the server storage overhead relative to Mylar, essentially replicating the document each time it is shared. We also show that high server storage overhead is not inherent, by giving an alternate (albeit impractical) construction that manages to avoid it using obfuscation.
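As a rough illustration of the flavor of a PRF-based scheme with per-share replication (a sketch under our own simplifications, not the paper's exact construction):

```python
import hmac, hashlib, secrets

# Toy sketch of the PRF-based approach mentioned above: each time a document
# is shared, the recipient gets a fresh per-(document, user) key, and the
# server stores deterministic PRF tags of the document's keywords under that
# key, replicating storage per share. This illustrates the store/search flow
# only; it is our simplification, not the paper's MKSE scheme.

def prf(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

server_index: dict[tuple[str, str], set[bytes]] = {}  # (doc, user) -> keyword tags

def share(doc_id: str, user: str, keywords: list[str]) -> bytes:
    k = secrets.token_bytes(32)                        # fresh key for this share
    server_index[(doc_id, user)] = {prf(k, w) for w in keywords}
    return k                                           # handed to the recipient

def search(doc_id: str, user: str, k: bytes, word: str) -> bool:
    # The server only ever sees the pseudorandom tag, not the keyword itself.
    return prf(k, word) in server_index[(doc_id, user)]

k_bob = share("doc1", "bob", ["crypto", "oram"])
assert search("doc1", "bob", k_bob, "oram")
assert not search("doc1", "bob", k_bob, "nizk")
```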
2018
TCC
Traitor-Tracing from LWE Made Simple and Attribute-Based
Abstract
A traitor tracing scheme is a public key encryption scheme for which there are many secret decryption keys. Any of these keys can decrypt a ciphertext; moreover, even if a coalition of users collude, put together their decryption keys and attempt to create a new decryption key, there is an efficient algorithm to trace the new key to at least one of the colluders. Recently, Goyal, Koppula and Waters (GKW, STOC 18) provided the first traitor tracing scheme from LWE with ciphertext and secret key sizes that grow polynomially in $\log n$, where $n$ is the number of users. The main technical building block in their construction is a strengthening of (bounded collusion secure) secret-key functional encryption which they refer to as mixed functional encryption (FE). In this work, we improve upon and extend the GKW traitor tracing scheme:
- We provide simpler constructions of mixed FE schemes based on the LWE assumption. Our constructions improve upon the GKW construction in terms of expressiveness, modularity, and security.
- We provide a construction of attribute-based traitor tracing for all circuits based on the LWE assumption.
2018
TCC
Is There an Oblivious RAM Lower Bound for Online Reads?
Abstract
Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an $O(\log n)$ overhead per access, where $n$ is the data size. The work of Goldreich and Ostrovsky gave a lower bound showing that this is optimal for ORAM schemes that operate in a “balls and bins” model, where memory blocks can only be shuffled between different locations but not manipulated otherwise. The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond “balls and bins”? The work of Boyle and Naor (ITCS ’16) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with $o(\log n)$ overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO ’18) shows that there indeed is an $\Omega(\log n)$ lower bound for general online ORAM. This still leaves the question open for online read-only ORAM, or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM where reads (but not writes) have an $o(\log n)$ overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
2018
TCC
Watermarking PRFs Under Standard Assumptions: Public Marking and Security with Extraction Queries
Abstract
A software watermarking scheme can embed some information called a mark into a program while preserving its functionality. No adversary can remove the mark without damaging the functionality of the program. Cohen et al. (STOC ’16) gave the first positive results for watermarking, showing how to watermark certain pseudorandom function (PRF) families using indistinguishability obfuscation (iO). Their scheme has a secret marking procedure to embed marks in programs and a public extraction procedure to extract the marks from programs; security holds even against an attacker that has access to a marking oracle. Kim and Wu (CRYPTO ’17) later constructed a PRF watermarking scheme under only the LWE assumption. In their scheme, both the marking and extraction procedures are secret, but security only holds against an attacker with access to a marking oracle but not an extraction oracle. In fact, it is possible to completely break the security of the latter scheme using extraction queries, which is a significant limitation in any foreseeable application. In this work, we construct a new PRF watermarking scheme with the following properties.
- The marking procedure is public, and therefore anyone can embed marks in PRFs from the family. Previously we had no such construction even using obfuscation.
- The extraction key is secret, but marks remain unremovable even if the attacker has access to an extraction oracle. Previously we had no such construction under standard assumptions.

Our scheme is simple, uses generic components and can be instantiated under many different assumptions such as DDH, Factoring or LWE.
The above benefits come with one caveat compared to prior work: the PRF family that we can watermark depends on the public parameters of the watermarking scheme and the watermarking authority has a secret key which can break the security of all of the PRFs in the family. Since the watermarking authority is usually assumed to be trusted, this caveat appears to be acceptable.
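Schematically, the functionality described above has the following shape (an interface sketch with our own names and types, not the paper's formalism):

```python
from abc import ABC, abstractmethod
from typing import Optional

# Interface sketch matching the properties listed above: marking is public
# (keyed only by the public parameters), extraction uses a secret key, and
# the watermarking authority holds a master secret that can break the PRFs
# in the family (the caveat noted above).
class WatermarkablePRF(ABC):
    @abstractmethod
    def setup(self) -> tuple[bytes, bytes, bytes]:
        """Return (public parameters pp, secret extraction key xk,
        authority master secret msk)."""

    @abstractmethod
    def mark(self, pp: bytes, prf_key: bytes, message: bytes) -> object:
        """Publicly embed `message` into the PRF with key `prf_key`,
        preserving its input/output behavior almost everywhere."""

    @abstractmethod
    def extract(self, xk: bytes, program: object) -> Optional[bytes]:
        """Recover the embedded message (or None if unmarked); marks must
        remain unremovable even given oracle access to this procedure."""
```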
2016
TCC
2015
ASIACRYPT
2014
CRYPTO
2014
CRYPTO
2013
JOFC
Fully Leakage-Resilient Signatures
Abstract
A signature scheme is fully leakage resilient (Katz and Vaikuntanathan, ASIACRYPT’09) if it is existentially unforgeable under an adaptive chosen-message attack even in a setting where an adversary may obtain bounded (yet arbitrary) leakage information on all intermediate values that are used throughout the lifetime of the system. This is a strong and meaningful notion of security that captures a wide range of side-channel attacks. One of the main challenges in constructing fully leakage-resilient signature schemes is dealing with leakage that may depend on the random bits used by the signing algorithm, and constructions of such schemes are known only in the random-oracle model. Moreover, even in the random-oracle model, known schemes are only resilient to leakage of less than half the length of their signing key. In this paper we construct the first fully leakage-resilient signature schemes without random oracles. We present a scheme that is resilient to any leakage of length $(1-o(1))L$ bits, where $L$ is the length of the signing key. Our approach relies on generic cryptographic primitives, and at the same time admits rather efficient instantiations based on specific number-theoretic assumptions. In addition, we show that our approach extends to the continual-leakage model, recently introduced by Dodis, Haralambiev, Lopez-Alt and Wichs (FOCS’10), and by Brakerski, Tauman Kalai, Katz and Vaikuntanathan (FOCS’10). In this model the signing key is allowed to be refreshed, while its corresponding verification key remains fixed, and the amount of leakage is assumed to be bounded only in between any two successive key refreshes.
2012
EUROCRYPT
2008
EUROCRYPT
Program Committees
- Eurocrypt 2024 (Area chair)
- TCC 2024
- Crypto 2022
- Eurocrypt 2021
- TCC 2020
- Crypto 2018
- TCC 2017
- Eurocrypt 2017
- TCC 2015
- Asiacrypt 2014
- PKC 2014
- Crypto 2013
- TCC 2012
Coauthors
- Shweta Agrawal (2)
- Joël Alwen (3)
- Gilad Asharov (1)
- Marshall Ball (1)
- Carsten Baum (1)
- Jens Berlips (1)
- Alexander Bienstock (1)
- Allison Bishop (1)
- Nir Bitansky (4)
- Elette Boyle (2)
- Zvika Brakerski (1)
- Ran Canetti (1)
- David Cash (2)
- Suvradip Chakraborty (2)
- Walther Chen (1)
- Yilei Chen (1)
- Ran Cohen (2)
- Ronald Cramer (1)
- Dana Dachman-Soled (1)
- Ivan B. Damgård (1)
- Ivan Damgård (2)
- Lalita Devadas (1)
- Srinivas Devadas (1)
- Yevgeniy Dodis (17)
- Fangqi Dong (2)
- Nico Döttling (1)
- Stefan Dziembowski (2)
- Kevin M. Esvelt (1)
- Sebastian Faust (1)
- Serge Fehr (1)
- Christopher W. Fletcher (1)
- Leonard Foner (1)
- Juan A. Garay (1)
- Sanjam Garg (3)
- Rosario Gennaro (1)
- Craig Gentry (2)
- Shafi Goldwasser (1)
- Rishab Goyal (1)
- Dana Gretton (1)
- Jiaxin Guan (2)
- Siyao Guo (1)
- Mohammad Hajiabadi (1)
- Shai Halevi (4)
- Ariel Hamlin (3)
- Zihan Hao (2)
- Kristiyan Haralambiev (1)
- Carmit Hazay (2)
- Brett Hemenway (1)
- Dennis Hofheinz (1)
- Justin Holmgren (2)
- Yuval Ishai (1)
- Zahra Jafargholi (5)
- Abhishek Jain (3)
- Aayush Jain (1)
- Yael Tauman Kalai (2)
- Chethan Kamath (2)
- Harish Karthikeyan (1)
- Tomasz Kazana (2)
- Eike Kiltz (1)
- Saleet Klein (1)
- Karen Klein (2)
- Ilan Komargodski (1)
- Lucas Kowalczyk (1)
- Stephan Krenn (1)
- Alptekin Küpçü (2)
- Martin Kysel (1)
- Xin Li (1)
- Wei-Kai Lin (1)
- Huijia Lin (1)
- Minghao Liu (1)
- Alex Lombardi (3)
- Adriana López-Alt (5)
- Steve Lu (1)
- Ji Luo (1)
- Vadim Lyubashevsky (2)
- Fermi Ma (1)
- Tal Malkin (1)
- Daniel Masny (1)
- Ethan Mook (4)
- Tal Moran (2)
- Pratyay Mukherjee (2)
- Moni Naor (1)
- Jesper Buus Nielsen (2)
- Ryo Nishimaki (3)
- Tatsuaki Okamoto (1)
- Rafail Ostrovsky (3)
- Carles Padró (1)
- Omer Paneth (1)
- Alain Passelègue (2)
- Valerio Pastro (1)
- Krzysztof Pietrzak (6)
- Manoj Prabhakaran (2)
- Willy Quach (11)
- Rajmohan Rajaraman (1)
- Vanishree Rao (1)
- Mariana Raykova (1)
- Ling Ren (1)
- Ronald L. Rivest (1)
- Paul Rösler (1)
- Ron D. Rothblum (3)
- Lawrence Roy (1)
- Francesca Sage-Ling (1)
- Alessandra Scafuro (2)
- Gil Segev (3)
- Adi Shamir (2)
- Abhi Shelat (3)
- Elaine Shi (2)
- Noah Stephens-Davidowitz (1)
- Eran Tromer (1)
- LaKyah Tyner (2)
- Jonathan Ullman (1)
- Salil P. Vadhan (1)
- Vinod Vaikuntanathan (9)
- Marten van Dijk (1)
- Lynn Van Hauwe (1)
- Mayank Varia (1)
- Daniele Venturi (1)
- Theia Vogel (1)
- Shabsi Walfish (1)
- Brent Waters (6)
- Hoeteck Wee (7)
- Benjamin Weinstein-Raun (1)
- Mor Weiss (5)
- Daniel Wichs (93)
- Stephen Wooster (1)
- David J. Wu (1)
- Shota Yamada (1)
- Andrew C. Yao (1)
- Yu Yu (1)
- Mark Zhandry (3)
- Hong-Sheng Zhou (1)
- Giorgos Zirdelis (1)