Seventh IACR Theory of Cryptography Conference
TCC 2010
February 9-11, 2010, ETH Zurich
Zurich, Switzerland
An Efficient Parallel Repetition Theorem
Johan Håstad, Rafael Pass, Douglas Wikström, Krzysztof Pietrzak
We present a general parallel-repetition theorem with an efficient
reduction. As a corollary of this theorem we establish that
parallel repetition reduces the soundness error at an exponential
rate in any public-coin argument and, more generally, in any argument
where the verifier's messages, but not necessarily its decision to
accept or reject, can be efficiently simulated with noticeable
probability.
Parallel Repetition Theorems for Interactive Arguments
Kai-Min Chung, Feng-Hao Liu
We study efficient parallel repetition theorems for several classes of
interactive arguments and obtain the following results:
- We show a tight parallel repetition theorem for public-coin
interactive arguments by giving a tight analysis for a reduction
algorithm of Håstad et al. [HPPW09]. That is, $n$-fold
parallel repetition decreases the soundness error from $\delta$ to
$\delta^n$. The crux of our improvement is a new analysis that avoids
using Raz's Sampling Lemma, the key ingredient in the previous results
(a worked example of both bounds follows this list).
- We give a new security analysis to strengthen a parallel repetition
theorem of Håstad et al. for a more general class of arguments. We
show that $n$-fold parallel repetition decreases the soundness error
from $\delta$ to $\delta^{n/2}$, which is almost tight. In particular,
we remove the dependency on the number of rounds in the bound, and as a
consequence, extend the "concurrent" repetition theorem of
Wikström [Wikstrom09] to this model.
- We obtain a way to turn any interactive argument into one in
the class above using fully homomorphic encryption schemes. This gives
a way to amplify the soundness of any interactive argument without
increasing the round complexity.
- We give a simple and generic transformation which shows that tight
direct product theorems imply almost-tight Chernoff-type theorems. This
extends our results to Chernoff-type theorems, and gives an alternative
proof to the Chernoff-type theorem of Impagliazzo et al.
[IJK09] for weakly-verifiable puzzles.
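To make the quantitative content concrete (our illustration; the numbers are not from the paper): for base soundness error $\delta = 1/2$ and $n = 80$ repetitions, the tight bound and the almost-tight bound above give
\[
\delta^{n} = 2^{-80} \qquad \text{versus} \qquad \delta^{n/2} = 2^{-40},
\]
both exponentially small in $n$, with the tight analysis gaining a factor of two in the exponent.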
Almost Optimal Bounds for Direct Product Threshold Theorem
Charanjit S. Jutla
We consider weakly-verifiable puzzles which are challenge-response
puzzles such that the responder may not be able to verify for itself
whether it answered the challenge correctly. We consider the $k$-wise
direct product of such puzzles, where the responder now has to solve $k$
puzzles chosen independently in parallel. Canetti et al. have shown that
such direct product puzzles have a hardness which rises exponentially
with $k$. In the threshold case addressed by Impagliazzo et al., the
responder is required to answer correctly a fraction of challenges above
a threshold. The bound on the hardness of this threshold parallel
version was shown to be similar to the Chernoff bound, but with rather
weak constants in the exponent. Namely, Impagliazzo et al. show that for
a puzzle for which the probability of failure is $\delta$, the
probability of failing on fewer than $(1-\gamma)\delta k$ out of $k$
puzzles, for any parallel strategy, is at most $e^{-\gamma^2\delta k/40}$.
In this paper, we develop new techniques to bound this probability, and
show that it is arbitrarily close to the Chernoff bound. To be precise,
the bound is $e^{-\gamma^2(1-\gamma)\delta k/2}$. We show that given any
responder that solves $k$ parallel puzzles with a good threshold, there
is a uniformized parallel solver that achieves the same threshold while
being oblivious to the permutation of the puzzles. This enhances the
analysis considerably, and may be of
independent interest.
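As a rough numerical illustration of the gap between the exponents (our own sketch; the parameter values are hypothetical, not from the paper):

import math

# Hypothetical illustrative parameters.
delta, gamma, k = 0.5, 0.1, 1000

ijk = math.exp(-gamma**2 * delta * k / 40)               # Impagliazzo et al.
new = math.exp(-gamma**2 * (1 - gamma) * delta * k / 2)  # this paper
chernoff = math.exp(-gamma**2 * delta * k / 2)           # Chernoff benchmark

print(f"IJK: {ijk:.3f}  new: {new:.3f}  Chernoff: {chernoff:.3f}")
# IJK: 0.882  new: 0.105  Chernoff: 0.082

For these values the new bound is already meaningful, while the older constant of $40$ in the exponent gives almost no guarantee.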
On Symmetric Encryption and Point Obfuscation
Ran Canetti, Yael Tauman Kalai, Mayank Varia, Daniel Wichs
We show tight connections between several cryptographic primitives,
namely encryption with weakly random keys, encryption with key-dependent
messages (KDM), and obfuscation of point functions with multi-bit output
(which we call multi-bit point functions, or MBPFs, for short). These
primitives, which have been studied mostly separately in recent works,
bear some apparent similarities, both in the flavor of their security
requirements and in the flavor of their constructions and assumptions.
Still, rigorous connections have not been drawn.
Our results can be interpreted as indicating that MBPF obfuscators
imply a very strong form of encryption that simultaneously achieves
security for weakly-random keys and key-dependent messages as special
cases. Similarly, each one of the other primitives implies a certain
restricted form of MBPF obfuscation. Our results carry both constructions
and impossibility results from one primitive to others. In particular:
- The recent impossibility result for KDM security of Haitner and
Holenstein (TCC '09) carries over to MBPF obfuscators.
- The Canetti-Dakdouk construction of MBPF obfuscators based on a
strong variant of the DDH assumption (EC '08) gives an encryption
scheme which is secure w.r.t. any weak key distribution of
super-logarithmic min-entropy (and in particular, also has very strong
leakage-resilience properties).
- All the recent constructions of encryption schemes that are secure
w.r.t. weak keys imply a weak form of MBPF obfuscators.
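For concreteness, a multi-bit point function is (in standard notation; our rendering, not the paper's)
\[
I_{x,m}(x') = \begin{cases} m & \text{if } x' = x, \\ \bot & \text{otherwise}, \end{cases}
\]
i.e., it reveals the multi-bit output $m$ exactly on the hidden point $x$.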
Obfuscation of Hyperplane Membership
Ran Canetti, Guy Rothblum, Mayank Varia
Previous work on program obfuscation gives strong negative results for
general-purpose obfuscators, and positive results for obfuscating simple
functions such as equality testing (point functions). In this work, we
construct an obfuscator for a more complex algebraic functionality:
testing for membership in a hyperplane (of constant dimension). We prove
the security of the obfuscator under a new strong variant of the
Decisional Diffie-Hellman assumption. Finally, we show a cryptographic
application of the new obfuscator to digital signatures.
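In standard notation (our rendering), the functionality being obfuscated is hyperplane membership over $\mathbb{F}_p^{\,d}$ for constant dimension $d$:
\[
f_{\mathbf{a}}(\mathbf{x}) = 1 \iff \langle \mathbf{a}, \mathbf{x} \rangle = \sum_{i=1}^{d} a_i x_i \equiv 0 \pmod{p}.
\]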
Privacy-Enhancing Cryptography: From Theory into Practice
Jan Camenisch
We conduct an increasing part of our daily transactions electronically
and thereby leave a permanent electronic trail of personal data. We
are almost never able to see what data about us we leave behind, where
it is processed, or where it is stored. Indeed, controlling the
dispersal of our data and protecting our privacy has become virtually
impossible.
In this talk we will investigate the extent to which cryptographic
tools and other technical means can help us regain control of our data
and preserve our privacy. To this end, we will review the
most important of the practical cryptographic mechanisms and discuss how
they could be applied. In a second part, we will report on the readiness
of the industry to indeed employ such technologies and on how governments
address the current erosion of privacy.
On Complete Primitives for Fairness
Dov Gordon, Yuval Ishai, Tal Moran, Rafail Ostrovsky, Amit Sahai
For secure two-party and multi-party computation with abort, the
classification of which primitives are complete has been extensively
studied in the literature. However, for fair secure computation, where
(roughly speaking) either all parties learn the output
or none do, the question of complete primitives has remained largely
unstudied. In this work, we initiate a rigorous study of completeness
for primitives that allow fair computation. We show the following
results:
- No "short" primitive is complete for fairness. In
surprising contrast to other notions of security for secure two-party
computation, we show that for fair secure computation, no primitive of
size $O(\log k)$ is complete, where $k$ is a security parameter. This
is the case even if we can enforce parallelism in calls to the
primitives (i.e., the adversary does not get output from any primitive
in a parallel call until it sends input to all of them). This negative
result holds regardless of any computational assumptions.
- A fairness hierarchy. We clarify the fairness landscape
further by exhibiting the existence of a "fairness hierarchy". We
show that for every "short" $\ell = O(\log k)$, no protocol making
(serial) calls to any $\ell$-bit primitive can be used to construct
even an $(\ell+1)$-bit simultaneous broadcast.
- Positive results. To complement the negative results, we
exhibit a $k$-bit primitive that is complete for two-party fair
secure computation. We show how to generalize this result to the
multi-party setting.
- Fairness combiners. We also introduce the question of
constructing a protocol for fair secure computation from primitives
that may be faulty. We show that this is possible when a majority of
the instances are honest. On the flip side, we show that this result is
tight: no functionality is complete for fairness if half (or more) of
the instances can be malicious.
On the Necessary and Sufficient Assumptions for UC Computation
Ivan Damgård, Jesper Buus Nielsen, Claudio Orlandi
We study the necessary and sufficient assumptions for universally
composable (UC) computation, both in terms of setup and computational
assumptions. We look at the common reference string model, the uniform
random string model and the key-registration authority model (KRA), and
provide new results for all of them. Perhaps most interestingly, we show
that:
- For even the minimal meaningful KRA, where we only assume that the
secret key is a value which is hard to compute from the public key, one
can UC securely compute any poly-time functionality if there exists a
passively secure oblivious-transfer protocol for the stand-alone model.
Since a KRA where the secret keys can be computed from the public keys
is useless, and some setup assumption is needed for UC secure
computation, this establishes the best we could hope for in the KRA model:
any non-trivial KRA is sufficient for UC computation.
- We show that in the KRA model one-way functions are sufficient for
UC commitment and UC zero-knowledge. These are the first examples of UC
secure protocols for non-trivial tasks which do not assume the
existence of public-key primitives. In particular, the protocols show
that non-trivial UC computation is possible in Minicrypt.
From Passive to Covert Security at Low Cost
Ivan Damgård, Martin Geisler, Jesper Buus Nielsen
Aumann and Lindell defined security against covert attacks,
where the adversary is malicious, but is only caught cheating with a
certain probability. The idea is that in many real-world cases, a large
probability of being caught is sufficient to prevent the adversary from
trying to cheat. In this paper, we show how to compile a passively
secure protocol for honest majority into one that is secure against
covert attacks, again for honest majority, and that catches cheating
with probability $1/4$. The cost of the modified protocol is essentially
twice that of the original plus an overhead that only depends on the
number of inputs.
A Twist on the Naor-Yung Paradigm and Its Application to Efficient CCA-Secure Encryption from Hard Search Problems
Ronald Cramer, Dennis Hofheinz, Eike Kiltz
The Naor-Yung (NY) paradigm shows how to build a chosen-ciphertext secure
encryption scheme from three conceptual ingredients:
- a weakly (i.e., IND-CPA) secure encryption scheme,
- a "replication strategy" that specifies how to use the weakly
secure encryption scheme; concretely, a NY-encryption contains several
weak encryptions of the same plaintext,
- a non-interactive zero-knowledge (NIZK) proof system to show that a
given ciphertext is consistent, i.e., contains weak encryptions of the
same plaintext.
The NY paradigm served both as a breakthrough proof-of-concept, and as an
inspiration to subsequent constructions. However, the NY construction
leads to impractical encryption schemes, due to the usually prohibitively
expensive NIZK proof.
In this contribution, we give a variant of the NY paradigm that leads to
practical, fully IND-CCA secure encryption schemes whose security can
be based on a generic class of algebraic complexity assumptions. Our
approach refines NY's approach as follows:
- Our sole computational assumption is that of a Diffie-Hellman (DH)
type two-move key exchange protocol, interpreted as a weakly secure key
encapsulation mechanism (KEM).
- Our "replication strategy" is as follows. Key generation consists
of replicating the KEM several times, but only the first pass.
Encryption then consists of performing the second pass with respect to
all of these, but with the same random coins in each instance (a toy
sketch follows this abstract).
- For proving consistency of a given ciphertext, we employ a
practical universal hash proof system, case-tailored to our KEM and
replication strategy.
We instantiate our paradigm both from computational Diffie-Hellman
(CDH) and from RSA-type assumptions. This way, practical IND-CCA-secure
encryption schemes based on search problems can be built and explained
in a generic, NY-like fashion.
We would like to stress that while we generalize universal hash proof
systems as a proof system, we do not follow or generalize the approach
of Cramer and Shoup to build IND-CCA secure encryption. Their approach
uses specific hash proof systems that feature, on top of a NIZK
property, a computational indistinguishability property. Hence they
necessarily build upon decisional assumptions, whereas we show how to
implement our approach with search assumptions. Our approach uses hash
proof systems in the NY way, namely solely as a device to prove
consistency. In our case, secrecy is provided by the "weak encryption"
component, which allows us to embed search problems.
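The toy sketch below (ours; illustrative parameters, and it omits the hash proof system and all security-relevant details) shows the replication strategy for a Diffie-Hellman two-move key exchange viewed as a KEM:

import secrets

# Toy group: 2**127 - 1 is prime, but this choice is purely
# illustrative and not a secure instantiation.
p, g = 2**127 - 1, 3

def keygen(n):
    # Replicate only the first pass of the key exchange n times.
    xs = [secrets.randbelow(p - 2) + 1 for _ in range(n)]
    return xs, [pow(g, x, p) for x in xs]

def encapsulate(pks):
    # Second pass w.r.t. all instances, with the SAME random coins r.
    r = secrets.randbelow(p - 2) + 1
    return pow(g, r, p), [pow(pk, r, p) for pk in pks]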
Two Is A Crowd? A Black-Box Separation Of One-Wayness and Security Under Correlated Inputs
Yevgeniy Vahlis
A family of trapdoor functions is one-way under correlated inputs if no
efficient adversary can invert it even when given the value of the
function on multiple correlated inputs. This powerful primitive was
introduced at TCC 2009 by Rosen and Segev, who use it in an elegant
black-box construction of a chosen-ciphertext-secure public-key
encryption scheme. In
this work we continue the study of security under correlated inputs, and
prove that there is no black-box construction of correlation-secure
injective trapdoor functions from classic trapdoor permutations, even if
the latter are assumed to be one-way for inputs drawn from high-entropy,
rather than uniform, distributions. Our negative result holds for all input
distributions where each $x_{i}$ is determined by the remaining $n-1$
coordinates. The techniques we employ for proving lower bounds about
trapdoor permutations are new and quite general, and we believe that they
will find other applications in the area of black-box separations.
Efficient, Robust and Constant-Round Distributed RSA Key Generation
Ivan Damgård, Gert Læssøe Mikkelsen
We present the first protocol for distributed RSA key generation that is
constant-round, secure against malicious adversaries, and has a negligibly
small bound on the error probability, even using only one iteration of
the underlying primality test on each candidate number.
Threshold Decryption and Zero-Knowledge Proofs for Lattice-Based Cryptosystems
Rikke Bendlin, Ivan Damgård
We present a variant of Regev's cryptosystem first presented in
[Regev05], but with a new choice of parameters. Using a recent classical
reduction by Peikert, we prove the scheme semantically secure based on
the worst-case lattice problem GapSVP. From this we construct a
threshold cryptosystem which has a very efficient and non-interactive
decryption protocol. We prove the threshold cryptosystem secure against
passive adversaries corrupting all but one of the players, and against
active adversaries corrupting less than one third of the players. We also
describe how one can build a distributed key generation protocol. In the
final part of the paper we show how one can, in zero-knowledge, prove
knowledge of the plaintext contained in a given ciphertext from Regev's
original cryptosystem or our variant. The proof is of size only a
constant times the size of the public key.
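For orientation, a minimal textbook sketch of Regev-style bit encryption (toy parameters of our choosing; the paper's variant differs precisely in its parameter selection, which enables threshold decryption and the GapSVP reduction):

import numpy as np

q, n, m = 4093, 16, 256          # toy parameters, not the paper's
rng = np.random.default_rng(0)

def keygen():
    s = rng.integers(0, q, n)                    # secret key
    A = rng.integers(0, q, (m, n))
    e = rng.integers(-2, 3, m)                   # small noise
    return s, (A, (A @ s + e) % q)

def encrypt(pk, bit):
    A, b = pk
    r = rng.integers(0, 2, m)                    # random subset of rows
    return (r @ A) % q, (r @ b + bit * (q // 2)) % q

def decrypt(s, ct):
    u, v = ct
    d = (v - u @ s) % q
    return int(min(d, q - d) > q // 4)           # near q/2 means bit 1

s, pk = keygen()
assert decrypt(s, encrypt(pk, 0)) == 0 and decrypt(s, encrypt(pk, 1)) == 1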
Ideal Hierarchical Secret Sharing Schemes
Oriol Farràs, Carles Padró
Hierarchical secret sharing is among the most natural generalizations of
threshold secret sharing, and it has attracted a lot of attention ever
since the invention of secret sharing. Several constructions of
ideal hierarchical secret sharing schemes have been proposed, but it
was not known which access structures admit such a scheme. We solve this
problem by providing a natural definition for the family of the
hierarchical access structures and, more importantly, by presenting a
complete characterization of the ideal hierarchical access structures,
that is, the ones admitting an ideal secret sharing scheme. Our
characterization deals with the properties of the hierarchically minimal
sets of the access structure, which are the minimal qualified sets whose
participants are in the lowest possible levels in the hierarchy. By
using our characterization, it can be efficiently checked whether any
given hierarchical access structure that is defined by its hierarchically
minimal sets is ideal. We use the well known connection between ideal
secret sharing and matroids and, in particular, the fact that every ideal
access structure is a matroid port. In addition, we use recent results
on ideal multipartite access structures and the connection between
multipartite matroids and integer polymatroids. We prove that every
ideal hierarchical access structure is the port of a representable
matroid and, more specifically, we prove that every ideal structure in
this family admits ideal linear secret sharing schemes over
fields of all characteristics. In addition, methods to construct such
ideal schemes can be derived from the results in this paper and the
aforementioned ones on ideal multipartite secret sharing. Finally, we
use our results to find a new proof for the characterization of the ideal
weighted threshold access structures that is simpler than the existing
one.
A Hardcore Lemma for Computational Indistinguishability: Security Amplification for Arbitrarily Weak PRGs with Optimal Stretch
Ueli Maurer, Stefano Tessaro
It is well known that two random variables $X$ and $Y$ with the same
range can be viewed as being equal (in a well-defined sense) with
probability $1-d(X,Y)$, where $d(X,Y)$ is their statistical distance,
which in turn is equal to the best distinguishing advantage for $X$ and
$Y$. In other words, if the best distinguishing advantage for $X$ and
$Y$ is $\epsilon$, then with probability $1-\epsilon$ they are completely
indistinguishable. This statement, which can be seen as an
information-theoretic version of a hardcore lemma, is for example very
useful for proving indistinguishability amplification results.
In this paper we prove the computational version of such a hardcore
lemma, thereby extending the concept of hardcore sets from the context of
computational hardness to the context of computational indistinguishability. This paradigm promises to have many applications
in cryptography and complexity theory. The lemma is proven in both a
non-uniform and a uniform setting.
For example, for a weak pseudorandom generator (PRG) for which the
(computational) distinguishing advantage is known to be bounded by
$\epsilon$ (e.g. $\epsilon=\frac{1}{2}$), one can define an event on the
seed of the PRG which has probability at least $1-\epsilon$ and such
that, conditioned on the event, the output of the PRG is essentially
indistinguishable from a string with almost maximal min-entropy, namely
$\log(1/(1-\epsilon))$ less than its length. As an application, we show
an optimally efficient construction for converting a weak PRG for any
$\epsilon < 1$ into a strong PRG by proving that the intuitive
construction of applying an extractor to an appropriate number of
independent weak PRG outputs yields a strong PRG. This improves
substantially on the best previous construction for security
amplification of PRGs, which does not work for $\epsilon \geq \frac{1}{2}$
and requires the seed of the constructed strong PRG to be very long.
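As a worked instance of the entropy bound (our own arithmetic): for $\epsilon = \frac{1}{2}$, the event on the seed has probability at least $\frac{1}{2}$, and conditioned on it the PRG output is indistinguishable from a string whose min-entropy falls short of its length by only
\[
\log\frac{1}{1-\epsilon} = \log 2 = 1 \text{ bit.}
\]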
On Related-Secret Pseudorandomness
David Goldenberg, Moses Liskov
Related-key attacks are attacks against constructions which use a secret
key (such as a blockcipher) in which an attacker attempts to exploit
known or chosen relationships among keys to circumvent security
properties. Security against related-key attacks has been a subject of
study in numerous recent cryptographic papers. However, most of these
results are attacks on specific constructions, while there has been
little positive progress on constructing related-key secure primitives.
In this paper, we attempt to address the question of whether related-key
secure blockciphers can be built from traditional cryptographic
primitives. We develop a theoretical framework of "related-secret
secure" cryptographic primitives, a class of primitives which includes
related-key secure blockciphers and PRFs. We show that while a single
related-secret pseudorandom bit is necessary and sufficient to create
related-key secure blockciphers, hard-core bits with typical proofs are
not related-secret pseudorandom. Since the pseudorandomness of
hard-core bits is the essential technique known to make pseudorandomness
from assumptions of simple hardness, this presents a very strong barrier
to the development of provably related-key secure blockciphers based on
standard hardness assumptions.
A Domain Extender for the Ideal Cipher
Jean-Sebastien Coron, Yevgeniy Dodis, Avradip Mandal, Yannick Seurin
We describe the first domain extender for ideal ciphers: we show a
construction that is indifferentiable from a $2n$-bit ideal cipher,
given an $n$-bit ideal cipher. Our construction is based on a $3$-round
Feistel, and is more efficient than first building an $n$-bit random
oracle from an $n$-bit ideal cipher (as in [Coron2005]) and then a
$2n$-bit ideal cipher from an $n$-bit random oracle (as in [Coron2008],
using a $6$-round Feistel). We also show that $2$ rounds are not enough
for indifferentiability by exhibiting a simple attack. We then consider
our construction in the standard model: we show that $2$ rounds are
enough to get a $2n$-bit tweakable block-cipher from an $n$-bit
tweakable block-cipher, and we show that with $3$ rounds we can get
beyond the birthday security bound.
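A minimal sketch of the $3$-round Feistel shape of such a domain extender (ours; the round function below is a hash-based stand-in rather than an ideal cipher, and the paper's construction keys the ideal cipher differently):

import hashlib

n = 64
MASK = (1 << n) - 1

def F(k, x):
    # Hypothetical n-bit round function standing in for the n-bit ideal cipher.
    h = hashlib.sha256(k.to_bytes(8, "big") + x.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:8], "big")

def feistel3(keys, msg):
    # keys: three round keys; msg: a 2n-bit input split into halves.
    L, R = msg >> n, msg & MASK
    for k in keys:                      # 3 rounds, as in the construction
        L, R = R, L ^ F(k, R)
    return (L << n) | R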
Delayed-Key Message Authentication for Streams
Marc Fischlin, Anja Lehmann
We consider message authentication codes for streams where the key becomes
known only at the end of the stream. This usually happens in key-exchange
protocols like SSL and TLS where the exchange phase concludes by sending a
MAC for the previous transcript and the newly derived key. SSL and TLS
provide tailor-made solutions for this problem (modifying HMAC to insert
the key only at the end, as in SSL, or using upstream hashing as in TLS).
Here we take a formal approach to this problem of delayed-key MACs and
provide solutions which are "as secure as schemes where the key would be
available right away" but still allow one to compute the MACs online even
if the key becomes known only later.
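To illustrate the upstream-hashing flavor of solution mentioned above (our sketch, not the paper's construction): the stream is hashed online, and only the short digest is MACed once the key arrives:

import hashlib, hmac

h = hashlib.sha256()
for chunk in (b"first part of stream ", b"second part"):
    h.update(chunk)                         # processed online, no key needed yet

key = b"key derived at end of handshake"    # becomes known only now
tag = hmac.new(key, h.digest(), hashlib.sha256).hexdigest()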
Founding Cryptography on Tamper-Proof Hardware Tokens
Vipul Goyal, Yuval Ishai, Amit Sahai, Ramarathnam Venkatesan, Akshay Wadia
A number of works have investigated using tamper-proof hardware tokens as
tools to achieve a variety of cryptographic tasks. In particular,
Goldreich and Ostrovsky considered the problem of software protection via
oblivious RAM. Goldwasser, Kalai, and Rothblum introduced the concept of
one-time programs: in a one-time program, an honest sender sends a set
of simple hardware tokens to a (potentially malicious) receiver. The
hardware tokens allow the receiver to execute a secret
program specified by the sender's tokens exactly once (or, more
generally, up to a fixed $t$ times). A recent line of work initiated by
Katz examined the problem of achieving UC-secure computation using
hardware tokens.
Motivated by the goal of unifying and strengthening these previous
notions, we consider the general question of basing secure computation on
hardware tokens. We show that the following tasks, which cannot be
realized in the "plain" model, become feasible if the parties are
allowed to generate and exchange tamper-proof hardware tokens.
- Unconditional and non-interactive secure computation. We show
that by exchanging simple stateful hardware tokens, any
functionality can be realized with unconditional security against
malicious parties. In the case of two-party functionalities $f(x,y)$
which take their inputs from a sender and a receiver and deliver their
output to the receiver, our protocol is non-interactive and only
requires a unidirectional communication of simple stateful tokens from
the sender to the receiver. This strengthens previous feasibility
results for one-time programs both by providing unconditional
security and by offering general protection against malicious
senders. As is typically the case for unconditionally secure
protocols, our protocol is in fact UC-secure. This improves over
previous works on UC-secure computation based on hardware tokens, which
provided computational security under cryptographic assumptions.
- Interactive secure computation from stateless tokens based on
one-way functions. We show that stateless hardware tokens are
sufficient to base general secure (in fact, UC-secure) computation on
the existence of one-way functions.
- Obfuscation from stateless tokens. We consider the problem of
realizing non-interactive secure computation from stateless tokens for
functionalities which allow the receiver to provide an arbitrary number
of inputs (these are the only functionalities one can hope to realize
non-interactively with stateless tokens). By building on recent
techniques for resettably secure computation, we obtain a general
positive result under standard cryptographic assumptions. This gives
the first general feasibility result for program obfuscation using stateless tokens, while strengthening the standard notion of
obfuscation by providing security against a malicious sender.
Truly Efficient String Oblivious Transfer Using Resettable
Tamper-Proof Tokens
Vladimir Kolesnikov
Secure function evaluation (SFE) requires expensive public-key
operations for each input bit of the function. This cost can be avoided
by using tamper-proof hardware.
However, all known efficient techniques require the hardware to have
long-term secure storage and to be resistant to reset or duplication
attacks. This is due to the intrinsic use of counters or erasures.
Known techniques that use resettable tokens rely on expensive primitives,
such as generic concurrent ZK, and remain far from practical.
We propose a truly efficient string oblivious transfer (OT) technique
relying on a resettable (in fact, stateless) tamper-proof token. Our
protocols require between $6$ and $27$ symmetric-key operations,
depending on the model. Our OT is secure
against covert sender and malicious receiver, and is sequentially
composable.
If the token is semi-honest (e.g., if it is provided by a trusted entity
but adversarially initialized), then our protocol is secure against
malicious adversaries in the concurrent execution setting.
Only one party is required to provide the token, which makes it
appropriate for typical asymmetric client-server scenarios (banking, TV,
etc.).
Leakage-Resilient Signatures
Sebastian Faust, Eike Kiltz, Krzysztof Pietrzak, Guy Rothblum
The strongest standard security notion for digital signature schemes is
unforgeability under chosen message attacks. In practice, however, this
notion can be insufficient due to "side-channel attacks" which exploit
leakage of information about the secret internal state.
In this work we put forward the notion of "leakage-resilient
signatures," which strengthens the standard security notion by
giving the adversary the additional power to learn a bounded amount
of arbitrary information about the secret state that was
accessed during every signature generation.
This notion naturally implies security against all
side-channel attacks as long as the amount of information leaked on
each invocation is bounded and "only computation leaks
information."
The main result of this paper is a construction which gives a
(tree-based, stateful) leakage-resilient signature scheme based on any
3-time signature scheme. The amount of information that our scheme can
safely leak per signature generation is $1/3$ of the information
the underlying 3-time signature scheme can leak in total.
Signature schemes that remain secure even if a bounded total amount of
information is leaked were recently constructed; instantiating our
construction with these schemes thus gives the first provably secure
leakage-resilient signature schemes.
The above construction assumes that the signing algorithm can sample
truly random bits, and thus an implementation would need some special
hardware (randomness gates). Simply generating this randomness using a
leakage-resilient stream-cipher will in general not work. Our second
contribution is a sound general principle to replace uniform random bits
in any leakage-resilient construction with pseudorandom ones: run two
leakage-resilient stream-ciphers (with independent keys) in parallel and
then apply a two-source extractor to their outputs.
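A minimal sketch of this principle (ours; the stream-ciphers are hash-chain stand-ins, and the inner product over GF(2) plays the role of the two-source extractor):

import hashlib

def stream(key, i):
    # Placeholder for a leakage-resilient stream-cipher's i-th output block.
    return hashlib.sha256(key + i.to_bytes(4, "big")).digest()

def ip2(x, y):
    # Inner product over GF(2): parity of the AND of the two bit strings.
    acc = 0
    for a, b in zip(x, y):
        acc ^= a & b
    return bin(acc).count("1") & 1

def derived_bit(i):
    # Two independent keys, outputs combined by the two-source extractor.
    return ip2(stream(b"key-one", i), stream(b"key-two", i))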
Public-key Encryption Schemes with Auxiliary Inputs
Yevgeniy Dodis, Shafi Goldwasser, Yael Tauman Kalai, Chris Peikert, Vinod Vaikuntanathan
We construct public-key cryptosystems that remain secure even when
the adversary is given any computationally uninvertible function
of the secret key as auxiliary input (even one that may reveal the secret
key information-theoretically). Our schemes are based on the decisional
Diffie-Hellman (DDH) and the Learning with Errors (LWE) problems.
As an independent technical contribution, we extend the Goldreich-Levin
theorem to provide a hard-core (pseudorandom) value over large
fields.
Public-Key Cryptographic Primitives Provably as Secure as Subset Sum
Vadim Lyubashevsky, Adriana Palacio, Gil Segev
We propose a semantically-secure public-key encryption scheme whose
security is polynomial-time equivalent to the hardness of solving random
instances of the subset sum problem. The subset sum assumption required
for the security of our scheme is weaker than that of existing subset-sum
based encryption schemes, namely the lattice-based schemes of Ajtai and
Dwork (STOC'97), Regev (STOC'03, STOC'05), and Peikert (STOC'09).
Additionally, our proof of security is simple and direct. We also present
a natural variant of our scheme that is secure against key-leakage
attacks, and an oblivious transfer protocol that is secure against
semi-honest adversaries.
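For concreteness, a random subset sum instance of the flavor underlying such schemes can be generated as follows (our illustration; the dimensions are placeholders, and the actual scheme and parameters are in the paper):

import secrets

n, M = 128, 2**256                               # illustrative dimensions only
a = [secrets.randbelow(M) for _ in range(n)]
s = [secrets.randbelow(2) for _ in range(n)]     # secret 0/1 vector
T = sum(ai * si for ai, si in zip(a, s)) % M     # target: recover s from (a, T)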
Rationality in the Full-Information Model
Ronen Gradwohl
We study rationality in protocol design for the full-information model, a
model characterized by computationally unbounded adversaries, no private
communication, and no simultaneity within rounds. Assuming that players
derive some utility from the outcomes of an interaction, we wish to
design protocols that are faithful: following the protocol should be an
optimal strategy for every player, for various definitions of «optimal»
and under various assumptions about the behavior of others and the
presence, size, and incentives of coalitions. We first focus on leader
election for players who only care about whether or not they are elected.
We seek protocols that are both faithful and resilient, and for some
notions of faithfulness we provide protocols, whereas for others we prove
impossibility results. We then proceed to random sampling, in which the
aim is for the players to jointly sample from a set of $m$ items with a
distribution that is a function of players' preferences over them. We
construct protocols for $m\geq 3$ that are faithful and resilient when
players are single-minded. We also show that there are no such protocols
for 2 items or for complex preferences.
Efficient Rational Secret Sharing in Standard Communication Networks
Georg Fuchsbauer, Jonathan Katz, David Naccache
We propose a new methodology for rational secret sharing leading to
various instantiations (in both the two-party and multi-party settings)
that are simple and efficient in terms of computation, share size, and
round complexity. Our protocols do not require physical assumptions or
simultaneous channels, and can even be run over asynchronous,
point-to-point networks.
We also propose new equilibrium notions (namely, computational versions
of strict Nash equilibrium and stability with respect to
trembles) and prove that our protocols satisfy them. These notions
guarantee, roughly speaking, that at each point in the protocol there is
a unique legal message a party can send. This, in turn, ensures
that protocol messages cannot be used as subliminal channels, something
achieved in prior work only by making strong assumptions on the
communication network.
Bounds on the Sample Complexity for Private Learning and Private Data Release
Amos Beimel, Shiva Kasiviswanathan, Kobbi Nissim
Learning is a task that generalizes many of the analyses that are applied
to collections of data, and in particular, collections of sensitive
individual information. Hence, it is natural to ask what can be learned
while preserving individual privacy. [Kasiviswanathan, Lee, Nissim,
Raskhodnikova, and Smith; FOCS 2008] initiated such a discussion. They
formalized the notion of private learning, as a combination of PAC
learning and differential privacy, and investigated what concept classes
can be learned privately. Somewhat surprisingly, they showed that,
ignoring time complexity, every PAC learning task could be performed
privately with polynomially many samples, and in many natural cases this
could even be done in polynomial time.
While these results seem to equate non-private and private learning,
there is still a significant gap: the sample complexity of (non-private)
PAC learning is crisply characterized in terms of the VC-dimension of the
concept class, whereas this relationship is lost in the constructions of
private learners, which exhibit, generally, a higher sample complexity.
Looking into this gap, we examine several private learning tasks and give
tight bounds on their sample complexity. In particular, we show strong
separations between sample complexities of proper and improper private
learners (such separation does not exist for non-private learners), and
between sample complexities of efficient and inefficient proper private
learners. Our results show that VC-dimension is not the right measure for
characterizing the sample complexity of proper private learning.
We also examine the task of private data release (as initiated by
[Blum, Ligett, and Roth; STOC 2008]), and give new lower bounds on the
sample complexity. Our results show that the logarithmic dependence on
the size of the instance space is essential for private data release.
New Techniques for Dual System
Encryption and Fully Secure HIBE with Short Ciphertexts
Allison B. Lewko, Brent Waters
We construct a fully secure HIBE scheme with short ciphertexts. The
previous construction of Boneh, Boyen, and Goh was only proven to be
secure in the selective model, under a non-static assumption which
depended on the depth of the hierarchy. To obtain full security, we
apply the dual system encryption concept recently introduced by Waters. A
straightforward application of this technique is insufficient to achieve
short ciphertexts, since the original instantiation of the technique
includes tags that do not compress. To overcome this challenge, we design
a new method for realizing dual system encryption. We provide a system in
composite order groups (of three primes) and prove the security of our
scheme under three static assumptions.
Robust Encryption
Michel Abdalla, Mihir Bellare, Gregory Neven
We provide a provable-security treatment of "robust" encryption.
Robustness means it is hard to produce a ciphertext that is valid for two
different users. Robustness makes explicit a property that has been
implicitly assumed in the past. We argue that it is an essential conjunct
of anonymous encryption. We show that natural anonymity-preserving ways
to achieve it, such as adding recipient identification information before
encrypting, fail. We provide transforms that do achieve it, efficiently
and provably. We assess the robustness of specific encryption schemes in
the literature, providing simple patches for some that lack the property.
We present various applications. Our work enables safer and simpler use
of encryption.
Secure Computation and Its Diverse Applications
Yuval Ishai
Secure multiparty computation (MPC) allows two or more parties to perform
a joint distributed computation without revealing their secrets to each
other. While MPC has traditionally been viewed as an end rather than a
means, in recent years we have seen a growing number of unexpected
applications of MPC and connections with problems from other domains.
In this talk we will survey several of these connections and highlight
some research directions which they motivate. In particular, we will
discuss the following connections:
- MPC and locally decodable codes. How can the secrecy property of MPC
protocols be useful for reliable and efficient access to data?
- MPC and the parallel complexity of cryptography. How can progress
on the round complexity of MPC lead to better parallel implementations
of one-way functions and other cryptographic primitives?
- MPC and private circuits. How can MPC be used to protect
cryptographic hardware against side-channel attacks?
- MPC in the head. How can MPC protocols which assume an honest
majority be useful in the context of two-party cryptography?
Concise Mercurial Vector Commitments and Independent Zero-Knowledge Sets with Short Proofs
Benoit Libert, Moti Yung
Introduced by Micali, Rabin and Kilian (MRK), the basic primitive of
zero-knowledge sets (ZKS) allows a prover to commit to a secret set $S$
so as to be able to prove statements such as $x \in S$ or $x\not\in S$.
Chase et al. showed that ZKS protocols are underlain by a
cryptographic primitive termed mercurial commitment. A (trapdoor)
mercurial commitment has two commitment procedures. At committing time,
the committer can choose not to commit to a specific message and rather
generate a dummy value which it will later be able to soft-open to any
message, without being able to completely open it. Hard commitments, on
the other hand, can be hard- or soft-opened to only one specific
message. At Eurocrypt 2008, Catalano, Fiore and Messina (CFM) introduced
an extension called trapdoor $q$-mercurial commitment (qTMC), which
allows committing to a vector of $q$ messages. These qTMC schemes are
interesting since their openings w.r.t. specific vector positions can be
short (ideally, the opening length should not depend on $q$), which
provides zero-knowledge sets with much shorter proofs when such a
commitment is combined with a Merkle tree of arity $q$. The CFM
construction notably features short proofs of non-membership as it
makes use of a qTMC scheme with short soft openings. A problem left open
is that hard openings still have size $O(q)$, which prevents proofs of
membership from being as compact as those of non-membership. In this
paper, we solve this open problem and describe a new qTMC scheme where
hard and soft position-wise openings both have constant size.
We then show how our scheme is amenable to constructing independent
zero-knowledge sets (i.e., ZKS's that prevent adversaries from
correlating their set to the sets of honest provers, as defined by
Gennaro and Micali). Our solution retains the short proof property for
this important primitive as well.
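To see why constant-size openings matter here (our own accounting): a Merkle tree of arity $q$ over $N$ leaves has depth $\log_q N$, so a (non-)membership proof consists of roughly $\log_q N$ position-wise openings; with both hard and soft openings of constant size, proofs of membership become as compact as proofs of non-membership, of total size
\[
O(\log_q N) = O\!\left(\frac{\log N}{\log q}\right).
\]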
Eye for an Eye: Efficient Concurrent Zero-Knowledge in the Timing Model
Rafael Pass, Wei-Lung Dustin Tseng, Muthuramakrishnan Venkitasubramaniam
We present new and efficient concurrent zero-knowledge protocols in the
timing model. In contrast to earlier works---which through
artificially-imposed delays require every protocol execution to
run at the speed of the slowest link in the network---our
protocols essentially only delay messages based on the actual
response time of each verifier (which can be significantly smaller).
Efficiency Preserving Transformations for Concurrent Non-Malleable Zero-Knowledge
Rafail Ostrovsky, Omkant Pandey, Ivan Visconti
Ever since its invention by Goldwasser, Micali, and Rackoff [GMR85],
zero-knowledge has become a central building block in cryptography, with
numerous applications ranging from electronic cash to digital
signatures. The properties of zero-knowledge range from the simplest
(and not particularly useful in practice) requirements, such as
honest-verifier zero-knowledge, to the most demanding (and most useful
in applications), such as non-malleable and concurrent zero-knowledge.
In this paper, we study the complexity of
efficient zero-knowledge reductions, from the first type to the
second type. More precisely, under a standard complexity assumption
(DDH), given as input a public-coin honest-verifier statistical
zero-knowledge argument of knowledge $\pi'$ for a language $L$, we show
a compiler that
produces an argument system $\pi$ for $L$ that is concurrent
non-malleable zero-knowledge (under non-adaptive inputs -- which is the
best one can hope to achieve [Lindell03,Lindell04]). If
$\kappa$ is the security parameter, the overhead of our compiler is as
follows:
- The round complexity of $\pi$ is $r+\tilde{O}(\log\kappa)$ rounds,
where $r$ is the round complexity of $\pi'$.
- The new prover $\mathcal{P}$ (resp., the new verifier
$\mathcal{V}$) incurs an additional overhead of (at most)
$r+\kappa\cdot\tilde{O}(\log^2\kappa)$
modular exponentiations. If tags of length $\tilde{O}(\log\kappa)$ are
provided, the overhead is only $r+\tilde{O}(\log^2\kappa)$ modular exponentiations.
The only previous construction of concurrent non-malleable
zero-knowledge (under non-adaptive inputs) is due to Barak, Prabhakaran
and Sahai [BPS06]. Their construction, however, mainly targets a
feasibility result rather than efficiency, and requires expensive
$NP$-reductions.
Efficiency Limitations for Sigma-Protocols for Group Homomorphisms
Endre Bangerter, Jan Camenisch, Stephan Krenn
Efficient zero-knowledge proofs of knowledge for group homomorphisms are
essential for numerous systems in applied cryptography. Especially,
$\Sigma$-protocols for proving knowledge of discrete logarithms in known
and hidden order groups are of prime importance. Yet, while these proofs
can be performed very efficiently within groups of known order, for
hidden order groups the respective proofs are far less efficient.
This paper shows strong evidence that this efficiency gap cannot be
bridged. Namely, while there are efficient protocols allowing a prover to
cheat only with negligibly small probability in the case of known order
groups, we provide strong evidence that for hidden order groups this
probability is bounded below by $1/2$ for all efficient
$\Sigma$-protocols not using common reference strings or the like.
We prove our results for a comprehensive class of $\Sigma$-protocols in
the generic group model, and further strengthen them by investigating
certain instantiations in the plain model.
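For contrast, here is the known-order case the abstract alludes to: a toy transcript of Schnorr's $\Sigma$-protocol for knowledge of a discrete logarithm, where a cheating prover succeeds with probability only $1/q$ per run (our sketch, with tiny illustrative parameters):

import secrets

p, q, g = 2039, 1019, 4         # toy group: g has prime order q modulo p

w = secrets.randbelow(q)        # prover's witness: discrete log of h
h = pow(g, w, p)                # public statement

r = secrets.randbelow(q)
a = pow(g, r, p)                # commitment
c = secrets.randbelow(q)        # verifier's random challenge
z = (r + c * w) % q             # response

assert pow(g, z, p) == a * pow(h, c, p) % p   # verification equation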
Composition of Zero-Knowledge Proofs with Efficient Provers
Eleanor Birrell, Salil Vadhan
We revisit the composability of different forms of zero-knowledge proofs
when the honest prover strategy is restricted to be polynomial time
(given an appropriate auxiliary input). Our results are:
- When restricted to efficient provers, the original
Goldwasser--Micali--Rackoff (GMR) definition of zero knowledge (STOC
'85), here called plain zero knowledge, is closed under a
constant number of sequential compositions (on the same input). This
contrasts with the case of unbounded provers, where Goldreich and
Krawczyk (ICALP '90, SICOMP '96) exhibited a protocol that is zero
knowledge under the GMR definition, but for which the sequential
composition of 2 copies is not zero knowledge.
- If we relax the GMR definition to only require that the simulation
is indistinguishable from the verifier's view by uniform
polynomial-time distinguishers, with no auxiliary input beyond the
statement being proven, then again zero knowledge is not closed under
sequential composition of 2 copies.
- We show that auxiliary-input zero knowledge with efficient provers
is not closed under parallel composition of 2 copies under the
assumption that there is a secure key agreement protocol (in which it
is easy to recognize valid transcripts). Feige and Shamir (STOC '90)
gave similar results under the seemingly incomparable assumptions that
(a) the discrete logarithm problem is hard, or (b) $\mathcal{UP}\not\subseteq
\mathcal{BPP}$ and one-way functions exist.
Private Coins versus Public Coins in Zero-Knowledge Proof Systems
Rafael Pass, Muthuramakrishnan Venkitasubramaniam
Goldreich and Krawczyk (SIAM J. Comput. '96) showed that only languages
in $BPP$ have constant-round public-coin black-box zero-knowledge
protocols. We extend their lower bound to "fully black-box"
private-coin protocols based on one-way functions. More precisely,
we show that only languages in $BPP^{Sam}$---where $Sam$ is a
"collision-finding" oracle in analogy with Simon (Eurocrypt '98) and
Haitner et al. (FOCS '07)---can have constant-round fully black-box
zero-knowledge proofs; the same holds for constant-round fully black-box
zero-knowledge arguments with sublinear verifier communication
complexity. We also establish near-linear lower bounds on the round
complexity of fully black-box concurrent zero-knowledge proofs (or
arguments with sublinear verifier communication) for languages outside
$BPP^{Sam}$.
The technique used to establish these results is a transformation from
private-coin protocols into $Sam$-relativized public-coin protocols; for
the case of fully black-box protocols based on one-way functions, this
transformation preserves zero knowledge, round complexity and
communication complexity.