CryptoDB
Shuichi Katsumata
Publications
Year
Venue
Title
2024
PKC
Breaking Parallel ROS: Implication for Isogeny and Lattice-based Blind Signatures
Abstract
Many of the three-round blind signatures based on identification protocols are only proven to be $\ell$-concurrently unforgeable for $\ell = \mathsf{polylog}(\lambda)$. It was only recently shown in a seminal work by Benhamouda et al. (EUROCRYPT'21) that this is not just a limitation of the proof technique. They proposed an elegant polynomial-time attack against the $\ell$-concurrent unforgeability of the classical blind Schnorr protocol for $\ell = \mathsf{poly}(\lambda)$. However, there are still many blind signatures following a similar recipe to blind Schnorr where the attack by Benhamouda et al. does not apply. This includes, for instance, the isogeny-based blind signature CSI-Otter by Katsumata et al. (CRYPTO'23) and the lattice-based blind signatures BLAZE+ by Alkeilani et al. (ACISP'20) and BlindOR by Alkeilani et al. (CANS'20).
In this work, we provide a simple and novel attack on blind signatures based on identification protocols performing \emph{parallel repetition} to reduce the soundness error. Our attack translates to a polynomial-time break of the $\ell$-concurrent unforgeability of CSI-Otter, BLAZE+, and BlindOR for $\ell = \mathsf{poly}(\lambda)$. More formally, we define an intermediate problem called the Parallel Random inhomogeneities in an Overdetermined Solvable system of linear equations (pROS) problem and show that an attack against pROS implies an attack on the above blind signatures. One takeaway of our finding is that while parallel repetition allows one to exponentially reduce the soundness error of an identification protocol, this has minimal effect on the resulting blind signature. Our attack is concretely very efficient and, for instance, breaks the 4-concurrent unforgeability of CSI-Otter in time roughly $2^{34}$ hash computations.
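For context, recall the (standard) ROS problem underlying the attack of Benhamouda et al., which the pROS problem generalizes to the parallel-repetition setting; the exact parallel formulation in the paper differs, so this is only an orienting sketch: given a prime $p$ and a random oracle $H_{\mathsf{ros}} \colon \mathbb{Z}_p^{\ell} \to \mathbb{Z}_p$, the $\mathsf{ROS}_\ell$ problem asks to find $\ell+1$ distinct vectors $\hat{\rho}_1, \dots, \hat{\rho}_{\ell+1} \in \mathbb{Z}_p^{\ell}$ and a vector $\mathbf{c} = (c_1, \dots, c_\ell) \in \mathbb{Z}_p^{\ell}$ such that
$$H_{\mathsf{ros}}(\hat{\rho}_i) = \langle \hat{\rho}_i, \mathbf{c} \rangle \quad \text{for all } i \in \{1, \dots, \ell+1\}.$$
A solver for such a system with $\ell = \mathsf{poly}(\lambda)$ open signing sessions is what breaks the concurrent unforgeability of blind Schnorr.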
2024
EUROCRYPT
Threshold Raccoon: Practical Threshold Signatures from Standard Lattice Assumptions
Abstract
Threshold signatures improve both the availability and the security of digital signatures by splitting the signing key into $N$ shares handed out to different parties. Later on, any subset of at least $T$ parties can cooperate to produce a signature on a given message. While threshold signatures have been extensively studied in the pre-quantum setting, they remain scarce under quantum-resilient assumptions.
We present the first efficient lattice-based threshold signatures with signature size 13 KiB and communication cost 40 KiB per user, supporting a threshold size as large as 1024 signers. We provide an accompanying high-performance implementation. The security of the scheme is based on the same assumptions as Dilithium, a signature recently selected by NIST for standardisation which, as far as we know, cannot easily be made threshold efficiently.
All operations used during signing rely on symmetric primitives and simple lattice operations; in particular, our scheme does not need heavy tools such as threshold fully homomorphic encryption or homomorphic trapdoor commitments as in prior constructions. The key technical idea is to use _one-time additive masks_ to mitigate the leakage of the partial signing keys through partial signatures.
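A schematic view of the one-time additive-mask idea (illustrative only; the notation is ours and the scheme's actual mask derivation and response equations differ in detail): each signer $i$ in the signing set $S$ adds a fresh mask $m_i$ to its partial response, where the masks are jointly sampled so that they cancel in the aggregate,
$$z_i = c \cdot s_i + r_i + m_i, \qquad \sum_{i \in S} m_i = 0 \;\Longrightarrow\; \sum_{i \in S} z_i = c \cdot \sum_{i \in S} s_i + \sum_{i \in S} r_i,$$
so the combined signature is unaffected while each individual $z_i$ no longer leaks information about the key share $s_i$.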
2024
CRYPTO
Adaptively Secure 5 Round Threshold Signatures from MLWE/MSIS and DL with Rewinding
Abstract
T-out-of-N threshold signatures have recently seen a renewed interest, with various types now available, each offering different tradeoffs. However, one property that has remained elusive is \emph{adaptive} security. When we target thresholdizing existing efficient signature schemes based on the Fiat-Shamir paradigm such as Schnorr, the elusive nature becomes clear. This class of signature schemes typically relies on the forking lemma to prove unforgeability. That is, an adversary is \emph{rewound and run twice} within the security game. Such a proof is at odds with adaptive security, as the reduction must be ready to answer $2(T - 1)$ secret key shares in total, implying that it can reconstruct the full secret key. Indeed, prior works either assumed strong idealized models such as the algebraic group model (AGM) or modified the underlying signature scheme so as not to rely on rewinding-based proofs.
In this work, we propose a new proof technique to construct adaptively secure threshold signatures for existing rewinding-based Fiat-Shamir signatures. As a result, we obtain the following:
1. The first adaptively secure 5 round lattice-based threshold signature under the MLWE and MSIS assumptions in the ROM. The resulting signature is a standard signature of Raccoon, a lattice-based signature scheme by del Pino et al., submitted to the additional NIST call for proposals.
2. The first adaptively secure 5 round threshold signature under the DL assumption in the ROM. The resulting signature is a standard Schnorr signature. To the best of our knowledge, this is the first adaptively secure threshold signature based on DL even assuming stronger models like AGM.
Our work is inspired by the recent statically secure lattice-based 3 round threshold signature by del Pino et al. (Eurocrypt 2024) based on Raccoon. While they relied on so-called one-time additive masks to solve lattice-specific issues, we notice that these masks can also be a useful tool to achieve adaptive security. At a very high level, we use these masks throughout the signing protocol to carefully control the information the adversary can learn from the signing transcripts. Intuitively, this allows the reduction to return a total of $2(T-1)$ \emph{randomly sampled} secret key shares to the adversary consistently and without being detected, resolving the above paradoxical situation. Lastly, by allowing the parties to maintain a simple state, we can compress our 5 round schemes into 4 rounds.
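To make the obstacle above explicit (a standard fact about Shamir sharing, stated here for intuition): if the signing key $s$ is shared as evaluations of a degree-$(T-1)$ polynomial $f$ with $f(0) = s$, then any $T$ of the revealed shares already determine $s$ by Lagrange interpolation,
$$s = f(0) = \sum_{i \in S} f(i) \prod_{k \in S \setminus \{i\}} \frac{k}{k - i} \quad \text{for any set } S \text{ of } T \text{ share indices},$$
and $2(T-1) \ge T$ whenever $T \ge 2$, so a reduction that can answer $2(T-1)$ adaptively chosen share queries effectively knows the full key.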
2024
CRYPTO
Two-Round Threshold Signature from Algebraic One-More Learning with Errors
Abstract
Threshold signatures have recently seen a renewed interest due to applications in cryptocurrency, while NIST has released a call for multi-party threshold schemes, with a deadline for submission expected for the first half of 2025. So far, all lattice-based threshold signatures requiring less than two rounds are based on heavy tools such as (fully) homomorphic encryption (FHE) and homomorphic trapdoor commitments (HTDC). This is not unexpected considering that most efficient two-round signatures from classical assumptions either rely on idealized models such as the algebraic group model or on one-more-type assumptions, neither of which has a nice analogue in the lattice world.
In this work, we construct the first efficient two-round lattice-based threshold signature without relying on FHE or HTDC. It has an offline-online feature where the first round can be preprocessed without knowing the message or the signer set, effectively making the signing phase non-interactive. The signature size is small and shows great scalability. For example, even for a threshold as large as 1024 signers, we achieve a signature size of roughly 11 KB. At the heart of our construction is a new lattice-based assumption called the algebraic one-more learning with errors (AOM-MLWE) assumption. We believe this to be a strong addition to the lattice toolkit and of independent interest. We establish the selective security of AOM-MLWE based on the standard MLWE and MSIS assumptions, and provide an in-depth analysis of its adaptive security, which our threshold signature is based on.
2024
CRYPTO
Raccoon: A Masking-Friendly Signature Proven in the Probing Model
Abstract
This paper presents Raccoon, a lattice-based signature scheme submitted to the NIST 2022 call for additional post-quantum signatures. Raccoon has the distinctive feature of always being masked: concretely, all sensitive intermediate values are shared into $d$ parts. The main design rationale of Raccoon is to be easy to mask at high orders, and this dictated most of its design choices, such as the introduction of new algorithmic techniques for sampling small errors. As a result, Raccoon achieves a masking overhead of $O(d \log d)$ that compares favorably with the overheads of $O(d^2 \log q)$ observed when masking standard lattice signatures.
In addition, we formally prove the security of Raccoon in the $t$-probing model: an attacker is able to probe $t < d-1$ shares during each execution of the main algorithms (key generation, signing, verification). While for most cryptographic schemes the black-box security and the $t$-probing security can be studied in isolation, in Raccoon this analysis is performed jointly.
To that end, a bridge must be made between the black-box game-based EUF-CMA proof and the usual simulation proofs of the ISW model (CRYPTO 2003). In this paper, we formalize an end-to-end masking proof by deploying the probing EUF-CMA notion introduced by Barthe et al. (Eurocrypt 2018) and exhibiting the simulators of the non-interference properties (Barthe et al., CCS 2016). More precisely, the proof is divided into three novel parts:
- a simulation proof in the ISW model that allows us to propagate the dependency to a restricted number of inputs and random coins,
- a game-based proof showing that the security of Raccoon with probes can be reduced to an instance of Raccoon with smaller parameters,
- a parameter study to ensure that the smaller instance is secure, using a robust generalization of the Rényi divergence.
While we apply our techniques to Raccoon, we expect that the algorithmic and proof techniques we introduce will be helpful for the design and analysis of future masking-friendly schemes.
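For readers less familiar with masking, a minimal illustration of the $d$-sharing used throughout (generic arithmetic masking, not Raccoon's specific gadgets): a sensitive value $x \in \mathbb{Z}_q$ is represented as
$$x = x_1 + x_2 + \cdots + x_d \bmod q,$$
where $x_1, \dots, x_{d-1}$ are uniformly random and $x_d$ is set so the sum is correct. Any $d-1$ (or fewer) shares are jointly uniform and hence reveal nothing about $x$, and linear operations can be carried out share-wise without ever reconstructing $x$; the cost of non-linear operations is what drives the masking overhead.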
2024
TCC
Tighter Adaptive IBEs and VRFs: Revisiting Waters’ Artificial Abort
Abstract
One of the most popular techniques to prove adaptive security of identity-based encryptions (IBE) and verifiable random functions (VRF) is the _partitioning technique_. Currently, there are only two methods to relate the adversary's advantage and runtime $(\epsilon, T)$ to those of the reduction, $(\epsilon_{\mathrm{proof}}, T_{\mathrm{proof}})$, using this technique: One originates with Waters (Eurocrypt 2005), who introduced the famous _artificial abort_ step to prove his IBE, achieving $(\epsilon_{\mathrm{proof}}, T_{\mathrm{proof}}) = (O(\epsilon/Q), T + O(Q^2/\epsilon^2))$, where $Q$ is the number of key queries. Bellare and Ristenpart (Eurocrypt 2009) provided an alternative analysis for the same scheme removing the artificial abort step, resulting in $(\epsilon_{\mathrm{proof}}, T_{\mathrm{proof}}) = (O(\epsilon^2/Q), T + O(Q))$. Importantly, the current reductions all lose quadratically in $\epsilon$.
In this paper, we revisit this two-decade-old problem and analyze proofs based on the partitioning technique through a new lens. For instance, the Waters IBE can now be proven secure with $(\epsilon_{\mathrm{proof}}, T_{\mathrm{proof}}) = (O(\epsilon^{3/2}/Q), T + O(Q))$, breaking the quadratic dependence on $\epsilon$. At the core of our improvement is a finer estimation of the failure probability of the reduction in Waters' original proof relying on artificial abort. We use Bonferroni's inequality, a tunable inequality obtained by cutting off higher-order terms from the equality derived from the inclusion-exclusion principle.
Our analysis not only improves the reduction of known constructions but also opens the door to new constructions. While a similar improvement to the Waters IBE is possible for the lattice-based IBE by Agrawal, Boneh, and Boyen (Eurocrypt 2010), we can slightly tweak the so-called partitioning function in their construction, achieving $(\epsilon_{\mathrm{proof}}, T_{\mathrm{proof}}) = (O(\epsilon/Q), T + O(Q))$. This is a much better reduction than the previously known $(O(\epsilon^3/Q^2), T + O(Q))$. We also propose the first VRF with proof and verification key sizes sublinear in the security parameter under the standard d-LIN assumption, while simultaneously improving the reduction cost compared to all prior constructions.
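For reference, the second-order Bonferroni inequalities alluded to above, obtained by truncating the inclusion-exclusion expansion of $\Pr[\bigcup_i A_i]$ for events $A_1, \dots, A_Q$ (the paper's analysis may use a different truncation order; this is only the simplest instance):
$$\sum_{i=1}^{Q} \Pr[A_i] \;-\; \sum_{1 \le i < j \le Q} \Pr[A_i \cap A_j] \;\le\; \Pr\Big[\bigcup_{i=1}^{Q} A_i\Big] \;\le\; \sum_{i=1}^{Q} \Pr[A_i].$$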
2023
CRYPTO
CSI-Otter: Isogeny-based (Partially) Blind Signatures from the Class Group Action with a Twist
Abstract
In this paper, we construct the first provably-secure isogeny-based (partially) blind signature scheme. While at a high level the scheme resembles the Schnorr blind signature, our work does not directly follow from that construction, since isogenies do not offer as rich an algebraic structure. Specifically, our protocol does not fit into the linear identification protocol abstraction introduced by Hauck, Kiltz, and Loss (EUROCRYPT'19), which was used to generically construct Schnorr-like blind signatures based on modules such as classical groups and lattices. Consequently, our scheme does not seem susceptible to the recent efficient ROS attack exploiting the linear nature of the underlying mathematical tool.
In more detail, our blind signature exploits the quadratic twist of an elliptic curve in an essential way to endow isogenies with a strictly richer structure than abstract group actions (but still more restrictive than modules). The basic scheme has public key size 128 B and signature size 8 KB under the CSIDH-512 parameter set; these are the smallest among all provably secure post-quantum blind signatures. Relying on a new ring variant of the group action inverse problem (rGAIP), we can halve the signature size to 4 KB while increasing the public key size to 512 B. We provide a preliminary cryptanalysis of rGAIP and show that for certain parameter settings, it is essentially as secure as the standard GAIP. Finally, we show a novel way to turn our blind signature into a partially blind signature, where we deviate from prior methods since they require hashing into the set of public keys while hiding the corresponding secret key; constructing such a hash function in the isogeny setting remains an open problem.
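The twist structure being exploited can be summarized as follows (a standard property of the CSIDH group action, stated in our notation; how CSI-Otter embeds it into the signing protocol is more involved): writing the class group action as $[\mathfrak{a}] \star E_0$, the quadratic twist of a public key satisfies
$$\big([\mathfrak{a}] \star E_0\big)^t = [\mathfrak{a}]^{-1} \star E_0,$$
so a prover holding $[\mathfrak{a}]$ can answer challenges with respect to both the curve and its twist, effectively emulating a $\{\pm 1\}$ challenge space without requiring a full module structure.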
2023
ASIACRYPT
Practical Round-Optimal Blind Signatures in the ROM from Standard Assumptions
Abstract
Blind signatures serve as a foundational tool for privacy-preserving applications and have recently seen renewed interest due to new applications in blockchains and privacy-authentication tokens. With this, constructing practical round-optimal (i.e., signing consists of the minimum two rounds) blind signatures in the random oracle model (ROM) has been an active area of research, where several impossibility results indicate that either the ROM or a trusted setup is inherent.
In this work, we present two round-optimal blind signatures under standard assumptions in the ROM with different approaches: one achieves the smallest sum of the signature and communication sizes, while the other achieves the smallest signature size. Both of our instantiations are based on standard assumptions over asymmetric pairing groups, i.e., CDH, DDH, and/or SXDH. Our first construction is a highly optimized variant of the generic blind signature construction by Fischlin (CRYPTO'06) and has signature and communication sizes of 447 B and 303 B, respectively. We progressively weaken the building blocks required by Fischlin and obtain the first blind signature where the sum of the signature and communication sizes fits below 1 KB based on standard assumptions. Our second construction is a semi-generic construction from a specific class of randomizable signature schemes that admits an all-but-one reduction. The signature size is only 96 B while the communication size is 2.2 KB. This matches the previously known smallest signature size while improving the communication size by several orders of magnitude. Finally, both of our constructions rely on a (non-black-box) fine-grained analysis of the forking lemma that may be of independent interest.
2022
EUROCRYPT
Group Signature and More from Isogenies and Lattices: Generic, Simple, and Efficient
Abstract
We construct an efficient dynamic group signature (or more generally an accountable ring signature) from isogeny and lattice assumptions. Our group signature is based on a simple generic construction that can be instantiated by cryptographically hard group actions such as the CSIDH group action or an MLWE-based group action. The signature is of size $O(\log N)$, where $N$ is the number of users in the group. Our idea builds on the recent efficient OR-proof by Beullens, Katsumata, and Pintore (Asiacrypt'20), where we efficiently add a proof of valid ciphertext to their OR-proof and further show that the resulting non-interactive zero-knowledge proof system is \emph{online extractable}.
Our group signatures satisfy more ideal security properties compared to previously known constructions, while simultaneously having an attractive signature size. The signature size of our isogeny-based construction is an order of magnitude smaller than all previously known post-quantum group signatures (e.g., 6.6 KB for 64 members). In comparison, our lattice-based construction has a larger signature size (e.g., either 126 KB or 89 KB for 64 members depending on the satisfied security property). However, since the $O(\cdot)$-notation hides a very small constant factor, it remains small even for very large group sizes, say $2^{20}$.
2022
CRYPTO
A New Framework For More Efficient Round-Optimal Lattice-Based (Partially) Blind Signature via Trapdoor Sampling
Abstract
Blind signatures, originally proposed by Chaum (CRYPTO'82), are interactive protocols between a signer and a user, where the user can obtain a signature without revealing the message to be signed. Recently, Hauck et al. (EUROCRYPT'20) observed that all efficient lattice-based blind signatures following the blueprint of the original blind signature by Rückert (ASIACRYPT'10) have a flawed security proof. This puts us in a situation where all known lattice-based blind signatures have at least two of the following drawbacks: heuristic security; signature sizes of 1 MB or more; support for only a bounded polynomial number of signatures; or reliance on non-standard assumptions.
In this work, we construct the first __round-optimal__ (i.e., two-round) lattice-based blind signature with a signature size of roughly 100 KB that supports unbounded polynomially many signatures and is provably secure under standard assumptions. Even if we allow non-standard assumptions and more rounds, ours provides the shortest signature size while also supporting unbounded polynomially many signatures. The main idea of our work is revisiting the generic blind signature construction by Fischlin (CRYPTO'06) and optimizing the __commit-then-open__ proof using techniques tailored to lattices. Our blind signature is also the first construction to have a formal security proof in the __quantum__ random oracle model. Finally, our blind signature extends naturally to __partially__ blind signatures, where the user and signer can include an agreed-upon public string in the message.
2022
JOFC
An Efficient and Generic Construction for Signal’s Handshake (X3DH): Post-quantum, State Leakage Secure, and Deniable
Abstract
The Signal protocol is a secure instant messaging protocol that underlies the security of numerous applications such as WhatsApp, Skype, Facebook Messenger among many others. The Signal protocol consists of two sub-protocols known as the X3DH protocol and the double ratchet protocol, where the latter has recently gained much attention. For instance, Alwen, Coretti, and Dodis (Eurocrypt’19) provided a concrete security model along with a generic construction based on simple building blocks that are instantiable from versatile assumptions, including post-quantum ones. In contrast, as far as we are aware, works focusing on the X3DH protocol seem limited. In this work, we cast the X3DH protocol as a specific type of authenticated key exchange (AKE) protocol, which we call a Signal-conforming AKE protocol, and formally define its security model based on the vast prior works on AKE protocols. We then provide the first efficient generic construction of a Signal-conforming AKE protocol based on standard cryptographic primitives such as key encapsulation mechanisms (KEM) and signature schemes. Specifically, this results in the first post-quantum secure replacement of the X3DH protocol based on well-established assumptions. Similar to the X3DH protocol, our Signal-conforming AKE protocol offers a strong (or stronger) flavor of security, where the exchanged key remains secure even when all the non-trivial combinations of the long-term secrets and session-specific secrets are compromised. Moreover, our protocol has a weak flavor of deniability and we further show how to progressively strengthen it using ring signatures and/or non-interactive zero-knowledge proof systems. Finally, we provide a full-fledged, generic C implementation of our (weakly deniable) protocol. We instantiate it with several Round 3 candidates (finalists and alternates) to the NIST post-quantum standardization process and compare the resulting bandwidth and computation performances. Our implementation is publicly available.
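As a purely illustrative (and heavily simplified) view of how a KEM and a signature scheme can be combined into a Signal-style handshake; the naming and message flow below are ours and do not reproduce the paper's exact Signal-conforming protocol:
$$\begin{aligned} &\text{Responder publishes a long-term KEM key } \mathsf{lpk}_B \text{ and a pre-published ephemeral KEM key } \mathsf{epk}_B.\\ &\text{Initiator: } (\mathsf{ct}_1, K_1) \leftarrow \mathsf{Encaps}(\mathsf{lpk}_B), \quad (\mathsf{ct}_2, K_2) \leftarrow \mathsf{Encaps}(\mathsf{epk}_B), \quad \sigma \leftarrow \mathsf{Sign}(\mathsf{ssk}_A, \mathsf{transcript}).\\ &\text{Both parties derive the session key } k = \mathsf{KDF}(K_1, K_2, \mathsf{transcript}). \end{aligned}$$
Combining encapsulations to long-term and ephemeral keys is what provides resilience when some secrets are compromised, while the signature supplies explicit authentication, which is also why deniability is only achieved in a weak form without further tools such as ring signatures.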
2021
EUROCRYPT
Round-Optimal Blind Signatures in the Plain Model from Classical and Quantum Standard Assumptions
Abstract
Blind signatures, introduced by Chaum (Crypto'82), allow a user to obtain a signature on a message without revealing the message itself to the signer. Thus far, all existing constructions of round-optimal blind signatures are known to require one of the following: a trusted setup, an interactive assumption, or complexity leveraging. This state of affairs is somewhat justified by the few known impossibility results on constructions of round-optimal blind signatures in the plain model (i.e., without trusted setup) from standard assumptions. However, since all of these impossibility results only hold \emph{under some conditions}, fully (dis)proving the existence of such round-optimal blind signatures has remained open.
In this work, we provide an affirmative answer to this problem and construct the first round-optimal blind signature scheme in the plain model from standard polynomial-time assumptions. Our construction is based on various standard cryptographic primitives and also on new primitives that we introduce in this work, all of which are instantiable from __classical and post-quantum__ standard polynomial-time assumptions. The main building block of our scheme is a new primitive called a blind-signature-conforming zero-knowledge (ZK) argument system. The distinguishing feature is that the ZK property holds by using a quantum polynomial-time simulator against non-uniform classical polynomial-time adversaries.
Syntactically one can view this as a delayed-input three-move ZK argument with a reusable first message, and we believe it would be of independent interest.
2021
PKC
An Efficient and Generic Construction for Signal's Handshake (X3DH): Post-Quantum, State Leakage Secure, and Deniable
Abstract
The Signal protocol is a secure instant messaging protocol that underlies the security of numerous applications such as WhatsApp, Skype, Facebook Messenger among many others. The Signal protocol consists of two sub-protocols known as the X3DH protocol and the double ratchet protocol, where the latter has recently gained much attention. For instance, Alwen, Coretti, and Dodis (Eurocrypt'19) provided a concrete security model along with a generic construction based on simple building blocks that are instantiable from versatile assumptions, including post-quantum ones. In contrast, as far as we are aware, works focusing on the X3DH protocol seem limited.
In this work, we cast the X3DH protocol as a specific type of authenticated key exchange (AKE) protocol, which we call a Signal-conforming AKE protocol, and formally define its security model based on the vast prior work on AKE protocols. We then provide the first efficient generic construction of a Signal-conforming AKE protocol based on standard cryptographic primitives such as key encapsulation mechanisms (KEM) and signature schemes.
Specifically, this results in the first post-quantum secure replacement of the X3DH protocol based on well-established assumptions. Similar to the X3DH protocol, our Signal-conforming AKE protocol offers a strong (or stronger) flavor of security, where the exchanged key remains secure even when all the non-trivial combinations of the long-term secrets and session-specific secrets are compromised. Moreover, our protocol has a weak flavor of deniability and we further show how to strengthen it using ring signatures. Finally, we provide a full-fledged, generic C implementation of our (weakly deniable) protocol. We instantiate it with several Round 3 candidates (finalists and alternates) to the NIST post-quantum standardization process and compare the resulting bandwidth and computation performances. Our implementation is publicly available.
2021
CRYPTO
A New Simple Technique to Bootstrap Various Lattice Zero-Knowledge Proofs to QROM Secure NIZKs
Abstract
Many of the recent advanced lattice-based Sigma-/public-coin honest-verifier zero-knowledge (HVZK) interactive protocols based on the techniques developed by Lyubashevsky (Asiacrypt'09, Eurocrypt'12) can be transformed into a non-interactive zero-knowledge (NIZK) proof in the random oracle model (ROM) using the Fiat-Shamir transform. Unfortunately, although they are known to be secure in the __classical__ ROM, existing proof techniques are incapable of proving them secure in the __quantum__ ROM (QROM). Alternatively, while we could instead rely on the Unruh transform (Eurocrypt'15), the resulting QROM-secure NIZK will incur a large overhead compared to the underlying interactive protocol.
In this paper, we present a new simple semi-generic transform that compiles many existing lattice-based Sigma-/public-coin HVZK interactive protocols into QROM-secure NIZKs. Our transform builds on a new primitive called an __extractable linear homomorphic commitment__ protocol. The resulting NIZK has several appealing features: it is not only a proof of knowledge but also straight-line extractable; the proof overhead is smaller compared to the Unruh transform; it enjoys a relatively small reduction loss; and it requires minimal background on quantum computation. To illustrate the generality of our technique, we show how to transform the recent 5-round protocol of Bootle et al. with exact soundness (Crypto'19) into a QROM-secure NIZK by increasing the proof size by a factor of 2.6. This compares favorably to the Unruh transform, which requires a factor of more than 50.
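As a reminder of the baseline transform in question (standard Fiat-Shamir, stated for orientation): a public-coin interactive protocol with transcript $(\mathsf{com}, \mathsf{ch}, \mathsf{resp})$ is made non-interactive by deriving the challenge from a hash of the first message and the statement,
$$\mathsf{ch} = H(\mathsf{com}, x), \qquad \pi = (\mathsf{com}, \mathsf{resp}).$$
The QROM difficulty is that a quantum adversary may query $H$ on superpositions of inputs, so classical arguments that observe or program individual random-oracle queries no longer apply directly.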
2021
TCC
Statistical ZAPs from Group-Based Assumptions
Abstract
We put forth a template for constructing statistical ZAPs for NP. Our template compiles NIZKs for NP in the hidden bit model (which exist unconditionally) into statistical ZAPs using a new notion of interactive hidden-bit generator (IHBG), which adapts the notion of hidden-bit generator to the plain model by building upon the recent notion of statistically-hiding extractable commitments. We provide a construction of IHBG from the explicit hardness of the decision Diffie-Hellman assumption (where explicit refers to requiring an explicit upper bound on the advantage of any polynomial-time adversary against the assumption) and the existence of statistical ZAPs for a specific simple language, building upon the recent construction of dual-mode hidden-bit generator from (Libert et al., EUROCRYPT 2020). We provide two instantiations of the underlying simple ZAP:
1. Using the recent statistical ZAP for the Diffie-Hellman language of (Couteau and Hartmann, CRYPTO 2020), we obtain statistical ZAPs for NP assuming (the explicit hardness of) DDH in $G_1$ and kernel-DH in $G_2$ (a search assumption which is weaker than DDH), where $(G_1,G_2)$ are groups equipped with an asymmetric pairing. This improves over the recent work of (Lombardi et al., EUROCRYPT 2020) which achieved a relaxed variant of statistical ZAP for NP, under a stronger assumption.
2. Using the recent work of (Couteau et al., EUROCRYPT 2020), we obtain statistical ZAPs for NP assuming the explicit hardness of DDH, together with the assumption that no efficient adversary can break the key-dependent message one-wayness of ElGamal with respect to efficient functions over groups of size $2^\lambda$ with probability better than $\mathsf{poly}(\lambda)/2^{(c + o(1)) \cdot \lambda}$ (denoted $2^{-c\lambda}$-OWKDM), for the constant $c = 1/2$, in pairing-free groups.
Note that the latter is a search discrete-log-style falsifiable assumption, incomparable to DDH (in particular, it is not known to imply public-key encryption).
2021
JOFC
Compact Designated Verifier NIZKs from the CDH Assumption Without Pairings
Abstract
In a non-interactive zero-knowledge (NIZK) proof, a prover can non-interactively convince a verifier of a statement without revealing any additional information. A useful relaxation of NIZK is a designated verifier NIZK (DV-NIZK) proof, where proofs are verifiable only by a designated party in possession of a verification key. A crucial security requirement of DV-NIZKs is unbounded-soundness, which guarantees soundness even if the verification key is reused for multiple statements. Most known DV-NIZKs (except standard NIZKs) for $\mathbf{NP}$ do not have unbounded-soundness. Existing DV-NIZKs for $\mathbf{NP}$ satisfying unbounded-soundness are based on assumptions which are already known to imply standard NIZKs. In particular, it is an open problem to construct (DV-)NIZKs from weak pairing-free group assumptions such as decisional Diffie-Hellman (DH). As a further matter, all constructions of (DV-)NIZKs from DH-type assumptions (regardless of whether they are over a pairing-free or pairing group) require the proof size to have a multiplicative overhead $|C| \cdot \mathsf{poly}(\kappa)$, where $|C|$ is the size of the circuit that computes the $\mathbf{NP}$ relation. In this work, we make progress on constructing DV-NIZKs from DH-type assumptions that are not known to imply standard NIZKs. Our results are summarized as follows: (1) DV-NIZKs for $\mathbf{NP}$ from the computational DH assumption over pairing-free groups. This is the first construction of such NIZKs on pairing-free groups and resolves the open problem posed by Kim and Wu (CRYPTO'18). (2) DV-NIZKs for $\mathbf{NP}$ with proof size $|C| + \mathsf{poly}(\kappa)$ from the computational DH assumption over specific pairing-free groups. This is the first DV-NIZK that achieves a compact proof from a standard DH-type assumption. Moreover, if we further assume the $\mathbf{NP}$ relation to be computable in $\mathbf{NC}^1$ and assume hardness of a (non-static) falsifiable DH-type assumption over specific pairing-free groups, the proof size can be made as small as $|w| + \mathsf{poly}(\kappa)$.
2021
JOFC
Tighter Security Proofs for GPV-IBE in the Quantum Random Oracle Model
Abstract
In (STOC, 2008), Gentry, Peikert, and Vaikuntanathan proposed the first identity-based encryption (GPV-IBE) scheme based on a post-quantum assumption, namely the learning with errors assumption. Since their proof was only made in the random oracle model (ROM) instead of the quantum random oracle model (QROM), it remained unclear whether the scheme was truly post-quantum or not. In (CRYPTO, 2012), Zhandry developed new techniques to be used in the QROM and proved security of GPV-IBE in the QROM, hence answering in the affirmative that GPV-IBE is indeed post-quantum. However, since the general technique developed by Zhandry incurred a large reduction loss, there was a wide gap between the concrete efficiency and security level provided by GPV-IBE in the ROM and QROM. Furthermore, regardless of being in the ROM or QROM, GPV-IBE is not known to have a tight reduction in the multi-challenge setting. Considering that in the real world an adversary can obtain many ciphertexts, it is desirable to have a security proof that does not degrade with the number of challenge ciphertexts. In this paper, we provide a much tighter proof for the GPV-IBE in the QROM in the single-challenge setting. In addition, we show that a slight variant of the GPV-IBE has an almost tight reduction in the multi-challenge setting both in the ROM and QROM, where the reduction loss is independent of the number of challenge ciphertexts. Our proof departs from the traditional partitioning technique and resembles the approach used in the public key encryption scheme of Cramer and Shoup (CRYPTO, 1998). Our proof strategy allows the reduction algorithm to program the random oracle the same way for all identities and naturally fits the QROM setting where an adversary may query a superposition of all identities in one random oracle query. Notably, our proofs are much simpler than the one by Zhandry and conceptually much easier to follow for cryptographers not familiar with quantum computation. Although at a high level the techniques used for the single- and multi-challenge settings are similar, the technical details are quite different. For the multi-challenge setting, we rely on the Katz–Wang technique (CCS, 2003) to overcome some obstacles regarding the leftover hash lemma.
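For orientation, a condensed (and slightly simplified) description of the GPV-IBE that these proofs concern: the master public key is a matrix $\mathbf{A} \in \mathbb{Z}_q^{n \times m}$ with a trapdoor, the secret key for an identity $\mathsf{id}$ is a short vector $\mathbf{e}_{\mathsf{id}}$ with $\mathbf{A}\mathbf{e}_{\mathsf{id}} = H(\mathsf{id})$, and encryption of a bit $\mu$ is dual-Regev style,
$$\mathbf{c}_0 = \mathbf{A}^{\top}\mathbf{s} + \mathbf{x}, \qquad c_1 = H(\mathsf{id})^{\top}\mathbf{s} + x' + \lceil q/2 \rceil \cdot \mu,$$
so that $c_1 - \mathbf{e}_{\mathsf{id}}^{\top}\mathbf{c}_0 \approx \lceil q/2 \rceil \cdot \mu$ and decryption rounds to recover $\mu$. Programming the random oracle $H$ is exactly where the ROM/QROM subtleties arise.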
2020
EUROCRYPT
Compact NIZKs from Standard Assumptions on Bilinear Maps
Abstract
A non-interactive zero-knowledge (NIZK) protocol enables a prover to convince a verifier of the truth of a statement without leaking any other information by sending a single message. The main focus of this work is on exploring short pairing-based NIZKs for all NP languages based on standard assumptions. In this regime, the seminal work of Groth, Ostrovsky, and Sahai (J.ACM'12) (GOS-NIZK) is still considered to be the state-of-the-art. Although fairly efficient, one drawback of GOS-NIZK is that the proof size is multiplicative in the circuit size computing the NP relation. That is, the proof size grows by $O(|C|k)$, where $C$ is the circuit for the NP relation and $k$ is the security parameter.
By now, there have been numerous follow-up works focusing on shortening the proof size of pairing-based NIZKs; however, thus far, all works come at the cost of relying either on a non-standard knowledge-type assumption or a non-static $q$-type assumption. Specifically, improving the proof size of the original GOS-NIZK under the same standard assumption has remained an open problem.
Our main result is a construction of a pairing-based NIZK for all of NP whose proof size is additive in $|C|$, that is, the proof size only grows by $|C| +poly(k)$, based on the decisional linear (DLIN) assumption. Since the DLIN assumption is the same assumption underlying GOS-NIZK, our NIZK is a strict improvement on their proof size.
As by-products of our main result, we also obtain the following two results: (1) We construct a perfectly zero-knowledge NIZK (NIPZK) for NP relations computable in NC1 with proof size $|w| \cdot \mathsf{poly}(k)$, where $|w|$ is the witness length, based on the DLIN assumption. This is the first pairing-based NIPZK for a non-trivial class of NP languages whose proof size is independent of $|C|$ based on a standard assumption. (2) We construct a universally composable (UC) NIZK for NP relations computable in NC1 in the erasure-free adaptive setting whose proof size is $|w| \cdot \mathsf{poly}(k)$ from the DLIN assumption. This is an improvement over the recent result of Katsumata, Nishimaki, Yamada, and Yamakawa (CRYPTO'19), which gave a similar scheme based on a non-static $q$-type assumption.
The main building block for all of our NIZKs is a constrained signature scheme with decomposable online-offline efficiency. This is a property which we newly introduce in this paper and construct from the DLIN assumption. We believe this construction is of independent interest.
2020
EUROCRYPT
Non-Interactive Zero-Knowledge in Pairing-Free Groups from Weaker Assumptions
Abstract
We provide new constructions of non-interactive zero-knowledge arguments (NIZKs) for NP from discrete-logarithm-style assumptions over cyclic groups, without relying on pairings. A previous construction from (Canetti et al., Eurocrypt'18) achieves such NIZKs under the assumption that no efficient adversary can break the key-dependent message (KDM) security of (additive) ElGamal with respect to all (even inefficient) functions over groups of size $2^\lambda$, with probability better than $\poly(\lambda)/2^{\lambda}$. This is an extremely strong, non-falsifiable assumption. In particular, even mild (polynomial) improvements over the current best known attacks on the discrete logarithm problem would already contradict this assumption. (Canetti et al. STOC'19) describe how to improve the assumption to rely only on KDM security with respect to all efficient functions, therefore obtaining an assumption that is (in spirit) falsifiable.
Our first construction improves this state of affairs. We provide a construction of NIZKs for NP under the CDH assumption together with the assumption that no efficient adversary can break the key-dependent message one-wayness of ElGamal with respect to efficient functions over groups of size $2^\lambda$, with probability better than $\poly(\lambda)/2^{c\lambda}$ (denoted $2^{-c\lambda}$-OWKDM), for a constant $c = 3/4$. Unlike the previous assumption, our assumption leaves an exponential gap between the best known attack and the required security guarantee.
We also analyse whether we could build NIZKs when CDH does not hold. As a second contribution, we construct an infinitely often NIZK argument system for NP (where soundness and zero-knowledge are only guaranteed to hold for infinitely many security parameters), under the $2^{-c\lambda}$-OWKDM security of ElGamal with $c = 28/29+o(1)$, together with the existence of low-depth pseudorandom generators.
2020
PKC
Lossy CSI-FiSh: Efficient Signature Scheme with Tight Reduction to Decisional CSIDH-512
Abstract
Recently, Beullens, Kleinjung, and Vercauteren (Asiacrypt’19) provided the first practical isogeny-based digital signature, obtained from the Fiat-Shamir (FS) paradigm. They worked with the CSIDH-512 parameters, relying on a new record class group computation. However, as with all standard FS signatures, the security proof is highly non-tight and the concrete parameters are set under the heuristic that the only way to attack the scheme is by finding collisions for a hash function. In this paper, we propose an FS-style signature scheme, called Lossy CSI-FiSh, constructed using the CSIDH-512 parameters and with a security proof based on the “Lossy Keys” technique introduced by Kiltz, Lyubashevsky and Schaffner (Eurocrypt’18). Lossy CSI-FiSh is provably secure under the same assumption which underlies the security of the key exchange protocol CSIDH (Castryck et al., Asiacrypt’18) and is almost as efficient as CSI-FiSh. For instance, aiming for small signature size, our scheme is expected to take around 800 ms to sign/verify while producing signatures of size around 280 bytes. This is only about twice as slow as CSI-FiSh while having similar signature size for the same parameter set. As an additional benefit, our scheme is by construction secure both in the classical and quantum random oracle model.
2020
CRYPTO
Adaptively Secure Constrained Pseudorandom Functions in the Standard Model
Abstract
Constrained pseudorandom functions (CPRFs) allow learning "constrained" PRF keys that can evaluate the PRF on a subset of the input space, or based on some predicate.
First introduced by Boneh and Waters [AC’13], Kiayias et al. [CCS’13] and Boyle et al. [PKC’14], they have been shown to be a useful cryptographic primitive with many applications.
These applications often require CPRFs to be adaptively secure, which allows the adversary to learn PRF values and constrained keys in an arbitrary order.
However, there is no known construction of adaptively secure CPRFs based on a standard assumption in the standard model for any non-trivial class of predicates.
Moreover, even if we rely on strong tools such as indistinguishability obfuscation (IO), the state-of-the-art construction of adaptively secure CPRFs in the standard model only supports the limited class of NC1 predicates.
In this work, we develop new adaptively secure CPRFs for various predicates from different types of assumptions in the standard model. Our results are summarized below.
- We construct adaptively secure and O(1)-collusion-resistant CPRFs for t-conjunctive normal form (t-CNF) predicates from one-way functions (OWFs) where t is a constant. Here, O(1)-collusion-resistance means that we can allow the adversary to obtain a constant number of constrained keys. Note that t-CNF includes bit-fixing predicates as a special case.
- We construct adaptively secure and single-key CPRFs for inner-product predicates from the learning with errors (LWE) assumption. Here, single-key means that we only allow the adversary to learn one constrained key. Note that inner-product predicates include t-CNF predicates for a constant t as a special case. Thus, this construction supports a more expressive class of predicates than that supported by the first construction though it loses the collusion-resistance and relies on a stronger assumption.
- We construct adaptively secure and O(1)-collusion-resistant CPRFs for all circuits from the LWE assumption and indistinguishability obfuscation (IO).
The first and second constructions are the first CPRFs for any non-trivial predicates to achieve adaptive security outside of the random oracle model and without relying on strong cryptographic assumptions. Moreover, the first construction is also the first to achieve any notion of collusion-resistance in this setting. Besides, we prove that the first and second constructions satisfy weak 1-key privacy, which roughly means that a constrained key does not reveal the corresponding constraint. The third construction is an improvement over previous adaptively secure CPRFs for less expressive predicates based on IO in the standard model.
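For concreteness, the standard CPRF syntax these results instantiate (notation ours, with the convention that a constrained key works on inputs accepted by the predicate): a constrained key $K_C$ derived from the master key $K$ and a predicate $C$ satisfies
$$\mathsf{Eval}(K_C, x) = F(K, x) \quad \text{whenever } C(x) = 1,$$
while $F(K, x^*)$ remains pseudorandom on every input $x^*$ with $C(x^*) = 0$, even given $K_C$. Adaptive security further lets the adversary interleave evaluation queries and constrained-key queries in an arbitrary order, which is exactly what makes standard-model proofs difficult.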
2020
ASIACRYPT
Adaptively Secure Inner Product Encryption from LWE
Abstract
Attribute-based encryption (ABE) is an advanced form of encryption scheme allowing for access policies to be embedded within the secret keys and ciphertexts. By now, we have ABEs supporting numerous types of policies based on hardness assumptions over bilinear maps and lattices. However, one of the distinguishing differences between ABEs based on these two breeds of assumptions is that the former can achieve adaptive security for quite expressive policies (e.g., inner-products, boolean formulas) while the latter cannot. Recently, two adaptively secure lattice-based ABEs have appeared and changed the state of affairs: a non-zero inner-product (NIPE) encryption by Katsumata and Yamada (PKC'19) and an ABE for t-CNF policies by Tsabary (CRYPTO'19). However, the policies supported by these ABEs are still quite limited and do not embrace the more interesting policies that have been studied in the literature. Notably, constructing an adaptively secure inner-product encryption (IPE) based on lattices still remains open.
In this work, we propose the first adaptively secure IPE based on the learning with errors (LWE) assumption with sub-exponential modulus size (without resorting to complexity leveraging). Concretely, our IPE supports inner-products over the integers Z with polynomially sized entries and satisfies adaptively weakly-attribute-hiding security.
We also show how to convert such an IPE to an IPE supporting inner-products over Z_p for a polynomial-sized p and a fuzzy identity-based encryption (FIBE) for small and large universes. Our result builds on the ideas presented in Tsabary (CRYPTO'19), which uses constrained pseudorandom functions (CPRF) in a semi-generic way to achieve adaptively secure ABEs, and the recent lattice-based adaptively secure CPRF for inner-products by Davidson et al. (CRYPTO'20). Our main observation is realizing how to weaken the conforming CPRF property introduced in Tsabary (CRYPTO'19) by taking advantage of the specific linearity property enjoyed by the lattice evaluation algorithms by Boneh et al. (EUROCRYPT'14).
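Recall the predicate underlying IPE (standard definition, restated here for readability): a secret key is associated with a vector $\mathbf{y}$ and a ciphertext with a vector $\mathbf{x}$, and decryption succeeds exactly when
$$\langle \mathbf{x}, \mathbf{y} \rangle = 0,$$
while weak attribute-hiding requires that the ciphertext reveal nothing about $\mathbf{x}$ beyond the value of this predicate to keys that fail to decrypt.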
2020
ASIACRYPT
Scalable Ciphertext Compression Techniques for Post-Quantum KEMs and their Applications
Abstract
A multi-recipient key encapsulation mechanism, or mKEM, provides a scalable solution to securely communicating with a large group, and offers savings in both bandwidth and computational cost compared to the trivial solution of communicating with each member individually. All prior works on mKEM are limited to classical assumptions and, although some generic constructions are known, they all require specific properties that are not shared by most post-quantum schemes.
In this work, we first provide a simple and efficient generic construction of mKEM that can be instantiated from versatile assumptions, including post-quantum ones. We then study these mKEM instantiations at a practical level using 8 post-quantum KEMs (which are lattice and isogeny-based NIST candidates), and CSIDH, and show that compared to the trivial solution, our mKEM offers savings of at least one order of magnitude in the bandwidth, and makes encryption time shorter by a factor ranging from 1.92 to 35. Additionally, we show that by combining mKEM with the TreeKEM protocol used by MLS – an IETF draft for secure group messaging – we obtain significant bandwidth savings.
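The source of the saving can be seen from a back-of-the-envelope accounting (generic, not the exact figures of any particular instantiation): encapsulating the same session key to $N$ recipients costs
$$N \cdot |\mathsf{ct}| \quad \text{(trivial solution)} \qquad \text{vs.} \qquad |\mathsf{ct}_{\mathsf{shared}}| + N \cdot |\mathsf{ct}_{\mathsf{user}}| \quad \text{(mKEM)},$$
where the bulk of the ciphertext can be shared across recipients and only a small per-user component $\mathsf{ct}_{\mathsf{user}}$ must be sent individually.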
2020
ASIACRYPT
Calamari and Falafl: Logarithmic (Linkable) Ring Signatures from Isogenies and Lattices
Abstract
We construct efficient ring signatures (RS) from isogeny and lattice assumptions. Our ring signatures are based on a logarithmic OR proof for group actions. We instantiate this group action by either the CSIDH group action or an MLWE-based group action to obtain our isogeny-based or lattice-based RS scheme, respectively. Even though the OR proof has a binary challenge space and therefore requires a number of repetitions which is linear in the security parameter, the sizes of our ring signatures are small and scale better with the ring size N than previously known post-quantum ring signatures. We also construct linkable ring signatures (LRS) that are almost as efficient as the non-linkable variants.
The isogeny-based scheme produces signatures whose size is an order of magnitude smaller than all previously known logarithmic post-quantum ring signatures, but it is relatively slow (e.g., 5.5 KB signatures and 79 s signing time for rings with 8 members). In comparison, the lattice-based construction is much faster, but has larger signatures (e.g., 30 KB signatures and 90 ms signing time for the same ring size). For small ring sizes our lattice-based ring signatures are slightly larger than state-of-the-art schemes, but they are smaller for ring sizes larger than N ≈ 1024.
2019
PKC
Non-zero Inner Product Encryption Schemes from Various Assumptions: LWE, DDH and DCR
Abstract
In non-zero inner product encryption (NIPE) schemes, ciphertexts and secret keys are associated with vectors and decryption is possible whenever the inner product of these vectors does not equal zero. So far, much effort on constructing bilinear map-based NIPE schemes has been made and this has led to many efficient schemes. However, the constructions of NIPE schemes without bilinear maps are much less investigated. The only other known NIPE constructions are based on lattices; however, they are all highly inefficient due to the need of converting inner product operations into circuits or branching programs. To remedy our rather poor understanding regarding NIPE schemes without bilinear maps, we provide two methods for constructing NIPE schemes: a direct construction from lattices and a generic construction from schemes for inner products (LinFE). Our first direct construction highly departs from the traditional lattice-based constructions and we rely heavily on new tools concerning Gaussian measures over multi-dimensional lattices to prove security. For our second generic construction, using the recent constructions of LinFE schemes as building blocks, we obtain the first NIPE constructions based on the DDH and DCR assumptions. In particular, we obtain the first NIPE schemes without bilinear maps or lattices.
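Stated explicitly (standard NIPE syntax, restated for readability): a key for a vector $\mathbf{y}$ decrypts a ciphertext for a vector $\mathbf{x}$ precisely when
$$\langle \mathbf{x}, \mathbf{y} \rangle \neq 0,$$
the complement of the usual inner-product predicate; despite its simplicity, this functionality is known to yield applications such as identity-based revocation.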
2019
PKC
Lattice-Based Revocable (Hierarchical) IBE with Decryption Key Exposure Resistance
Abstract
Revocable identity-based encryption (RIBE) is an extension of IBE that supports a key revocation mechanism, which is an indispensable feature for practical cryptographic schemes. Due to this extra feature, RIBE is often required to satisfy a strong security notion unique to the revocation setting called decryption key exposure resistance (DKER). Additionally, hierarchical IBE (HIBE) is another orthogonal extension of IBE that supports key delegation functionalities allowing for scalable deployments of cryptographic schemes. So far, R(H)IBE constructions with DKER are only known from bilinear maps, where all constructions rely heavily on the so-called key re-randomization property to achieve the DKER and/or hierarchical feature. Since lattice-based schemes seem to be inherently ill-suited to the key re-randomization property, no constructions of lattice-based R(H)IBE schemes with DKER are known. In this paper, we propose the first lattice-based RHIBE scheme with DKER without relying on the key re-randomization property, departing from all the previously known methods. We start our work by providing a generic construction of RIBE schemes with DKER, which uses as building blocks any two-level standard HIBE scheme and (weak) RIBE scheme without DKER. Based on previous lattice-based RIBE constructions without DKER, our result implies the first lattice-based RIBE scheme with DKER. Then, building on top of our generic construction, we construct the first lattice-based RHIBE scheme with DKER by further exploiting the algebraic structure of lattices. To this end, we prepare a new tool called the level conversion keys, which enables us to achieve the hierarchical feature without relying on the key re-randomization property.
2019
EUROCRYPT
Designated Verifier/Prover and Preprocessing NIZKs from Diffie-Hellman Assumptions
Abstract
In a non-interactive zero-knowledge (NIZK) proof, a prover can non-interactively convince a verifier of a statement without revealing any additional information. Thus far, numerous constructions of NIZKs have been provided in the common reference string (CRS) model (CRS-NIZK) from various assumptions; however, it still remains a long-standing open problem to construct them from tools such as pairing-free groups or lattices. Recently, Kim and Wu (CRYPTO'18) made great progress regarding this problem and constructed the first lattice-based NIZK in a relaxed model called NIZKs in the preprocessing model (PP-NIZKs). In this model, there is a trusted statement-independent preprocessing phase where secret information is generated for the prover and verifier. Depending on whether that secret information can be made public, PP-NIZK captures CRS-NIZK, designated-verifier NIZK (DV-NIZK), and designated-prover NIZK (DP-NIZK) as special cases. It was left as an open problem by Kim and Wu whether we can construct such NIZKs from weak pairing-free group assumptions such as DDH. As a further matter, all constructions of NIZKs from Diffie-Hellman (DH) type assumptions (regardless of whether they are over a pairing-free or pairing group) require the proof size to have a multiplicative overhead $|C| \cdot \mathsf{poly}(\kappa)$, where $|C|$ is the size of the circuit that computes the $\mathbf{NP}$ relation.
In this work, we make progress on constructing (DV, DP, PP)-NIZKs with varying flavors from DH-type assumptions. Our results are summarized as follows:
- DV-NIZKs for $\mathbf{NP}$ from the CDH assumption over pairing-free groups. This is the first construction of such NIZKs on pairing-free groups and resolves the open problem posed by Kim and Wu (CRYPTO'18).
- DP-NIZKs for $\mathbf{NP}$ with short proof size from a DH-type assumption over pairing groups. Here, the proof size has an additive overhead $|C| + \mathsf{poly}(\kappa)$ rather than a multiplicative overhead $|C| \cdot \mathsf{poly}(\kappa)$. This is the first construction of such NIZKs (including CRS-NIZKs) that does not rely on the LWE assumption, fully-homomorphic encryption, indistinguishability obfuscation, or non-falsifiable assumptions.
- PP-NIZKs for $\mathbf{NP}$ with short proof size from the DDH assumption over pairing-free groups. This is the first PP-NIZK that achieves a short proof size from a weak and static DH-type assumption such as DDH. Similarly to the above DP-NIZK, the proof size is $|C| + \mathsf{poly}(\kappa)$. This too serves as a solution to the open problem posed by Kim and Wu (CRYPTO'18).
Along the way, we construct two new homomorphic authentication (HomAuth) schemes which may be of independent interest.
2019
EUROCRYPT
Group Signatures Without NIZK: From Lattices in the Standard Model
Abstract
In a group signature scheme, users can anonymously sign messages on behalf of the group they belong to, yet it is possible to trace the signer when needed. Since the first proposal of lattice-based group signatures in the random oracle model by Gordon, Katz, and Vaikuntanathan (ASIACRYPT 2010), the realization of them in the standard model from lattices has attracted much research interest; however, it has remained unsolved. In this paper, we make progress on this problem by giving the first such construction. Our schemes satisfy CCA-selfless anonymity and full traceability, which are the standard security requirements for group signatures proposed by Bellare, Micciancio, and Warinschi (EUROCRYPT 2003) with a slight relaxation in the anonymity requirement suggested by Camenisch and Groth (SCN 2004). We emphasize that even with this relaxed anonymity requirement, all previous group signature constructions rely on random oracles or NIZKs, where currently NIZKs are not known to be implied by lattice-based assumptions. We propose two constructions that provide tradeoffs regarding the security assumption and efficiency:
- Our first construction is proven secure assuming the standard LWE and the SIS assumption. The sizes of the public parameters and the signatures grow linearly in the number of users in the system.
- Our second construction is proven secure assuming the standard LWE and the subexponential hardness of the SIS problem. The sizes of the public parameters and the signatures are independent of the number of users in the system.
Technically, we obtain the above schemes by combining a secret key encryption scheme with additional properties and a special type of attribute-based signature (ABS) scheme, thus bypassing the utilization of NIZKs. More specifically, we introduce the notion of indexed ABS, which is a relaxation of standard ABS. The above two schemes are obtained by instantiating the indexed ABS with different constructions. One is a direct construction we propose and the other is based on previous work.
2019
CRYPTO
Exploring Constructions of Compact NIZKs from Various Assumptions
Abstract
A non-interactive zero-knowledge (NIZK) protocol allows a prover to non-interactively convince a verifier of the truth of the statement without leaking any other information. In this study, we explore shorter NIZK proofs for all $$\mathbf{NP }$$ languages. Our primary interest is NIZK proofs from falsifiable pairing/pairing-free group-based assumptions. Thus far, NIZKs in the common reference string model (CRS-NIZKs) for $$\mathbf{NP }$$ based on falsifiable pairing-based assumptions all require a proof size at least as large as $$O(|C| \kappa )$$, where C is a circuit computing the $$\mathbf{NP }$$ relation and $$\kappa $$ is the security parameter. This holds true even for the weaker designated-verifier NIZKs (DV-NIZKs). Notably, constructing a (CRS, DV)-NIZK with proof size achieving an additive-overhead $$O(|C|) + \mathsf {poly}(\kappa )$$, rather than a multiplicative-overhead $$|C| \cdot \mathsf {poly}(\kappa )$$, based on any falsifiable pairing-based assumptions is an open problem.In this work, we present various techniques for constructing NIZKs with compact proofs, i.e., proofs smaller than $$O(|C|) + \mathsf {poly}(\kappa )$$, and make progress regarding the above situation. Our result is summarized below.
We construct CRS-NIZK for all $$\mathbf{NP }$$ with proof size $$|C| +\mathsf {poly}(\kappa )$$ from a (non-static) falsifiable Diffie-Hellman (DH) type assumption over pairing groups. This is the first CRS-NIZK to achieve a compact proof without relying on either lattice-based assumptions or non-falsifiable assumptions. Moreover, a variant of our CRS-NIZK satisfies universal composability (UC) in the erasure-free adaptive setting. Although it is limited to $$\mathbf{NP }$$ relations in $$\mathbf{NC }^1$$, the proof size is $$|w| \cdot \mathsf {poly}(\kappa )$$ where w is the witness, and in particular, it matches the state-of-the-art UC-NIZK proposed by Cohen, shelat, and Wichs (CRYPTO’19) based on lattices.We construct (multi-theorem) DV-NIZKs for $$\mathbf{NP }$$ with proof size $$|C|+\mathsf {poly}(\kappa )$$ from the computational DH assumption over pairing-free groups. This is the first DV-NIZK that achieves a compact proof from a standard DH type assumption. Moreover, if we further assume the $$\mathbf{NP }$$ relation to be computable in $$\mathbf{NC }^1$$ and assume hardness of a (non-static) falsifiable DH type assumption over pairing-free groups, the proof size can be made as small as $$|w| + \mathsf {poly}(\kappa )$$.
Another related but independent issue is that all (CRS, DV)-NIZKs require the running time of the prover to be at least $|C| \cdot \mathsf{poly}(\kappa)$. Considering that there exist NIZKs with efficient verifiers whose running time is strictly smaller than $|C|$, it is an interesting problem whether we can construct prover-efficient NIZKs. To this end, we construct prover-efficient CRS-NIZKs for $\mathbf{NP}$ with compact proofs through a generic construction using laconic functional evaluation schemes (Quach, Wee, and Wichs (FOCS'18)). This is the first NIZK in any model where the running time of the prover is strictly smaller than the time it takes to compute the circuit $C$ computing the $\mathbf{NP}$ relation. Finally, perhaps of independent interest, we formalize the notion of homomorphic equivocal commitments, which we use as building blocks to obtain our first result, and show how to construct them from pairing-based assumptions.
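To make the gap between the two overhead regimes concrete, the following back-of-the-envelope comparison uses an illustrative circuit size and a purely hypothetical $\mathsf{poly}(\kappa) = \kappa^2$ (the abstract does not fix this polynomial):
$$|\pi_{\mathrm{mult}}| = |C| \cdot \mathsf{poly}(\kappa) \qquad \text{versus} \qquad |\pi_{\mathrm{add}}| = |C| + \mathsf{poly}(\kappa).$$
For $|C| = 2^{20}$ and $\kappa = 128$ (so $\kappa^2 = 2^{14}$), the multiplicative overhead yields roughly $2^{20} \cdot 2^{14} = 2^{34}$, while the additive overhead yields roughly $2^{20} + 2^{14} \approx 2^{20}$, i.e., a proof whose size is dominated by $|C|$ itself.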
2018
PKC
Attribute-Based Signatures for Unbounded Circuits in the ROM and Efficient Instantiations from Lattices
Abstract
Attribute-based signatures (ABS), originally introduced by Maji et al. (CT-RSA'11), are an essential mechanism for fine-grained authentication. A user associated with an attribute x can sign w.r.t. a given public policy C only if the attribute satisfies C, i.e., $C(x)=1$. So far, much effort has been devoted to constructing bilinear map-based ABS schemes, where the state-of-the-art scheme of Sakai et al. (PKC'16) supports the very wide class of unbounded circuits as policies. However, constructions of ABS schemes without bilinear maps are far less investigated; it was not until recently that Tsabary (TCC'17) showed a lattice-based ABS scheme supporting bounded circuits as policies, at the cost of weakening the security requirement. In this work, we affirmatively close the gap between ABS schemes based on bilinear maps and lattices by constructing the first lattice-based ABS scheme for unbounded circuits in the random oracle model. We start by providing a generic construction of ABS schemes for unbounded circuits in the random oracle model, which in turn implies that one-way functions are sufficient to construct ABS schemes. To prove security, we formalize and prove a generalization of the forking lemma, which we call the "general multi-forking lemma with oracle access", capturing the situation where the simulator interacts with algorithms it cannot rewind, and also covering many features of recent lattice-based ZKPs. This formalization was in fact lacking in many existing anonymous signatures from lattices (e.g., group signatures), and is therefore believed to be of independent interest. Finally, we provide a concrete instantiation of our generic ABS construction from lattices by introducing a new $\Sigma$-protocol, departing significantly from previously known techniques, for proving possession of a valid signature under the lattice-based signature scheme of Boyen (PKC'10).
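To make the signing condition $C(x)=1$ concrete, the following toy sketch shows the generic ABS interface only; it is not the lattice-based instantiation from the paper, and all names are hypothetical:

from typing import Callable

Policy = Callable[[int], bool]   # a public policy C; C(x) is True iff attribute x satisfies it

def abs_keygen(attribute: int) -> dict:
    # The authority issues a signing key bound to the user's attribute x.
    return {"attribute": attribute}

def abs_sign(sk: dict, policy: Policy, message: bytes) -> dict:
    # Signing w.r.t. a public policy C succeeds only if C(x) = 1.
    if not policy(sk["attribute"]):
        raise ValueError("attribute does not satisfy the policy")
    # A real ABS signature reveals nothing about x beyond the fact C(x) = 1.
    return {"message": message}

def abs_verify(policy: Policy, message: bytes, signature: dict) -> bool:
    # Verification uses only the public policy and the message, never the attribute.
    return signature["message"] == message

# Usage: a key for attribute x = 7 can sign under the policy "x is odd".
sk = abs_keygen(7)
sigma = abs_sign(sk, lambda x: x % 2 == 1, b"hello")
assert abs_verify(lambda x: x % 2 == 1, b"hello", sigma)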
2018
ASIACRYPT
Tighter Security Proofs for GPV-IBE in the Quantum Random Oracle Model
Abstract
In (STOC, 2008), Gentry, Peikert, and Vaikuntanathan proposed the first identity-based encryption (GPV-IBE) scheme based on a post-quantum assumption, namely, the learning with errors (LWE) assumption. Since their proof was only made in the random oracle model (ROM) instead of the quantum random oracle model (QROM), it remained unclear whether the scheme was truly post-quantum or not. In (CRYPTO, 2012), Zhandry developed new techniques to be used in the QROM and proved security of GPV-IBE in the QROM, hence answering in the affirmative that GPV-IBE is indeed post-quantum. However, since the general technique developed by Zhandry incurred a large reduction loss, there was a wide gap between the concrete efficiency and security level provided by GPV-IBE in the ROM and QROM. Furthermore, regardless of being in the ROM or QROM, GPV-IBE is not known to have a tight reduction in the multi-challenge setting. Considering that in the real world an adversary can obtain many ciphertexts, it is desirable to have a security proof that does not degrade with the number of challenge ciphertexts. In this paper, we provide a much tighter proof for GPV-IBE in the QROM in the single-challenge setting. In addition, we show that a slight variant of GPV-IBE has an almost tight reduction in the multi-challenge setting both in the ROM and QROM, where the reduction loss is independent of the number of challenge ciphertexts. Our proof departs from the traditional partitioning technique and resembles the approach used in the public key encryption scheme of Cramer and Shoup (CRYPTO, 1998). Our proof strategy allows the reduction algorithm to program the random oracle the same way for all identities and naturally fits the QROM setting, where an adversary may query a superposition of all identities in one random oracle query. Notably, our proofs are much simpler than the one by Zhandry and conceptually much easier to follow for cryptographers not familiar with quantum computation. Although the techniques used for the single- and multi-challenge settings are similar at a high level, the technical details are quite different. For the multi-challenge setting, we rely on the Katz-Wang technique (CCS, 2003) to overcome some obstacles regarding the leftover hash lemma.
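For context, a minimal sketch of the dual-Regev-style GPV-IBE that these proofs concern (with parameters and error distributions omitted, and following the textbook description rather than the slight variant used for the multi-challenge result) is as follows. The master public key is $\mathbf{A} \in \mathbb{Z}_q^{n \times m}$, with a trapdoor for $\mathbf{A}$ as the master secret key; the key for identity $\mathsf{id}$ is a short vector $\mathbf{e}_{\mathsf{id}} \in \mathbb{Z}^m$ satisfying $\mathbf{A}\mathbf{e}_{\mathsf{id}} = H(\mathsf{id}) \bmod q$, where $H$ is the random oracle. To encrypt a bit $b$, sample $\mathbf{s} \leftarrow \mathbb{Z}_q^n$ and small errors $x$, $\mathbf{x}'$, and output
$$c_0 = \mathbf{s}^\top H(\mathsf{id}) + x + b \lfloor q/2 \rfloor, \qquad \mathbf{c}_1^\top = \mathbf{s}^\top \mathbf{A} + \mathbf{x}'^\top.$$
Decryption computes $c_0 - \mathbf{c}_1^\top \mathbf{e}_{\mathsf{id}} \approx b \lfloor q/2 \rfloor$ and rounds to recover $b$.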
2018
ASIACRYPT
Attribute-Based Signatures for Unbounded Languages from Standard Assumptions
Abstract
Attribute-based signature (ABS) schemes are advanced signature schemes that simultaneously provide fine-grained authentication and protect the privacy of the signer. Previously known expressive ABS schemes support either deterministic finite automata and circuits as policies from standard assumptions, or Turing machines from the existence of indistinguishability obfuscation. In this paper, we propose the first ABS scheme for a very general policy class, all deterministic Turing machines, from a standard assumption, namely, the Symmetric External Diffie-Hellman (SXDH) assumption. We also propose the first ABS scheme that allows nondeterministic finite automata (NFAs) to be used as policies. Although NFAs are less expressive than Turing machines, this is the first scheme that supports nondeterministic computations as policies. Our main idea lies in abstracting ABS constructions and introducing the concept of a history of computation; this allows a signer to prove in zero-knowledge possession of a policy that accepts the string associated with a message, while also hiding the policy, regardless of the computational model being used. With this abstraction in hand, we are able to construct ABS for Turing machines and NFAs using a surprisingly weak NIZK proof system: essentially, we only require a NIZK proof system for proving that a (normal) signature is valid. Such a NIZK proof system, together with a base signature scheme, is in turn possible from bilinear groups under the SXDH assumption, and hence so are our ABS schemes.
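To illustrate what a "history of computation" consists of for an NFA policy, the following toy sanity check verifies a proposed accepting run; the actual scheme proves knowledge of such a history in zero-knowledge rather than revealing it, and all names below are hypothetical:

# Toy illustration (not the scheme itself): a computation history for an NFA
# policy is the sequence of states visited on an accepting run over the string.
def check_nfa_history(delta, start, accepting, word, history):
    # delta: dict mapping (state, symbol) -> set of possible next states
    # history: proposed sequence of states, one per position, starting at `start`
    if len(history) != len(word) + 1 or history[0] != start:
        return False
    for i, symbol in enumerate(word):
        if history[i + 1] not in delta.get((history[i], symbol), set()):
            return False
    return history[-1] in accepting

# Example: an NFA over {0,1} accepting strings that end in 1.
delta = {("q0", "0"): {"q0"}, ("q0", "1"): {"q0", "q1"}}
print(check_nfa_history(delta, "q0", {"q1"}, "011", ["q0", "q0", "q0", "q1"]))  # True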
2017
ASIACRYPT
2016
ASIACRYPT
Program Committees
- Eurocrypt 2024
- Asiacrypt 2024
- Crypto 2023
- PKC 2023
- PKC 2022
- PKC 2021
- Asiacrypt 2021
Coauthors
- Nuttapong Attrapadung (1)
- Ward Beullens (2)
- Geoffroy Couteau (2)
- Alex Davidson (1)
- Samuel Dobson (1)
- Thomas Espitau (1)
- Goichiro Hanaoka (2)
- Keitaro Hashimoto (2)
- Ali El Kaafarani (2)
- Shuichi Katsumata (35)
- Kei Kimura (1)
- Kris Kwiatkowski (3)
- Yi-Fu Lai (3)
- Jason T. LeGrow (1)
- Mary Maller (1)
- Takahiro Matsuda (1)
- Fabrice Mouhartem (1)
- Ryo Nishimaki (8)
- Rafael del Pino (2)
- Rafaël Del Pino (1)
- Federico Pintore (4)
- Thomas Prest (5)
- Ling Qin (1)
- Michael Reichle (3)
- Mélissa Rossi (1)
- Markku-Juhani O. Saarinen (1)
- Elahe Sadeghi (1)
- Yusuke Sakai (2)
- Atsushi Takayasu (1)
- Kaoru Takemure (3)
- Bogdan Ursu (2)
- Shota Yamada (14)
- Takashi Yamakawa (10)