IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
04 March 2025
Michel SECK, Oumar Niang, Djiby Sow
Rivest, Shamir, and Adleman published the RSA cryptosystem in 1978, which has been widely used over the last four decades. The security of RSA is based on the difficulty of factoring large integers $N = pq$, where $p$ and $q$ are prime numbers. The public exponent $e$ and the private exponent $d$ are related by the equation $ed - k(p-1)(q-1) = 1$. Recently, Cotan and Teşeleanu (NordSec 2023) introduced a variant of RSA, where the public exponent $e$ and the private exponent $d$ satisfy the equation $ed - k(p^n-1)(q^n-1) = 1$ for some positive integer $n$. In this paper, we study the general equation $eu - (p^n - 1)(q^n - 1)v = w$ with positive integers $u$ and $v$, and $w\in \mathbb{Z}$. We show that, given the public parameters $N$ and $e$, one can recover $u$ and $v$ and factor the modulus $N$ in polynomial time, under specific conditions on $u$, $v$, and $w$, by combining continued fractions with Coppersmith's algorithm, which relies on lattice reduction techniques. Furthermore, we show that if the private exponent $d$ in an RSA-like cryptosystem is either too small or too large, then $N$ can be factored in polynomial time. This attack applies to the standard RSA cryptosystem.
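As background for the small-exponent case, the classical continued-fraction (Wiener-style) attack on standard RSA already shows how the convergents of $e/N$ expose a small private exponent $d$. The Python sketch below is only this textbook attack with toy, insecure parameters; it is not the paper's algorithm, which additionally uses Coppersmith's method and handles the generalized equation above.

```python
# Minimal sketch of the classical Wiener-style continued-fraction attack on
# standard RSA with a small private exponent d. Convergents of e/N approximate
# k/d; each candidate (k, d) is tested by recovering phi(N) and solving a
# quadratic for p and q. Parameters are toy values chosen for illustration.

from math import isqrt

def convergents(num, den):
    """Yield successive continued-fraction convergents h/k of num/den."""
    quotients = []
    while den:
        quotients.append(num // den)
        num, den = den, num % den
        h, h_prev, k, k_prev = 1, 0, 0, 1
        for q in quotients:
            h, h_prev = q * h + h_prev, h
            k, k_prev = q * k + k_prev, k
        yield h, k

def wiener_attack(e, N):
    """Try to factor N assuming e*d - k*phi(N) = 1 for some small d."""
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k != 0:
            continue
        phi = (e * d - 1) // k
        s = N - phi + 1              # equals p + q if phi is correct
        disc = s * s - 4 * N         # equals (p - q)^2 if phi is correct
        if disc >= 0:
            r = isqrt(disc)
            if r * r == disc and (s + r) % 2 == 0 and (s - r) // 2 > 1:
                return (s + r) // 2, (s - r) // 2
    return None

# Toy example: tiny primes and a deliberately small d (below N^(1/4)/3).
p, q = 104729, 104723
N, phi = p * q, (p - 1) * (q - 1)
d = 89
e = pow(d, -1, phi)
print(wiener_attack(e, N))           # expected output: (104729, 104723)
```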
Marius A. Aardal, Andrea Basso, Luca De Feo, Sikhar Patranabis, Benjamin Wesolowski
SQIsign is the leading isogeny-based digital signature scheme. Despite the many improvements that have appeared in the literature, all its recent variants lack a complete security proof. In this work, we provide the first full security proof of SQIsign, as submitted to the second round of NIST's on-ramp track for digital signatures.
To do so, we introduce a new framework, which we call Fiat-Shamir with hints, that captures all those protocols where the simulator needs additional information to simulate a transcript. Using this framework, we show that SQIsign is EUF-CMA secure in the ROM, assuming the hardness of the One Endomorphism problem with hints, or the hardness of the Full Endomorphism Ring problem with hints together with a hint indistinguishability assumption; all assumptions, unlike previous ones in the literature, are non-interactive. Along the way, we prove several intermediate results that may be of independent interest.
Sönke Jendral, Elena Dubrova
Ongoing efforts to transition to post-quantum secure public-key cryptosystems have created the need for algorithms with a variety of performance characteristics and security assumptions. Among the candidates in NIST’s post-quantum standardisation process for additional digital signatures is FAEST, a Vector Oblivious Linear Evaluation in-the-Head (VOLEitH)-based scheme, whose security relies on the one-wayness of the Advanced Encryption Standard (AES). The VOLEitH paradigm enables competitive performance and signature sizes under conservative security assumptions. However, since it was introduced recently, in 2023, its resistance to physical attacks has not yet been analysed. In this paper, we present the first security analysis of VOLEitH-based signature schemes in the context of side-channel and fault injection attacks. We demonstrate four practical attacks on a masked implementation of FAEST on an ARM Cortex-M4 capable of recovering the full secret key with high probability (greater than 0.87) from a single signature. These attacks exploit vulnerabilities of components specific to VOLEitH schemes and FAEST, such as the all-but-one vector commitments, VOLE generation, and AES proof generation. Finally, we propose countermeasures to mitigate these attacks and enhance the physical security of VOLEitH-based signature schemes.
Han Chen, Tao Huang, Phuong Pham, Shuang Wu
This paper addresses the critical challenges in designing cryptographic algorithms that achieve both high performance and cross-platform efficiency on ARM and x86 architectures, catering to the demanding requirements of next-generation communication systems, such as 6G and GPU/NPU interconnections. We propose HiAE, a high-throughput authenticated encryption algorithm optimized for performance exceeding 100 Gbps and designed to meet the stringent security requirements of future communication networks. HiAE leverages the stream cipher structure, integrating the AES round function for non-linear diffusion.
Our design achieves exceptional efficiency, with benchmark results from software implementations across various platforms showing over 180 Gbps on ARM devices in AEAD mode, making it the fastest AEAD solution on ARM chips.
Paul Frixons, Valerie Gilchrist, Péter Kutas, Simon-Philipp Merz, Christophe Petit
Cryptographic group actions provide simple post-quantum generalizations to many cryptographic protocols based on the discrete logarithm problem (DLP). However, many advanced group action-based protocols do not solely rely on the core group action problem (the so-called vectorization problem), but also on variants of this problem, to either improve efficiency or enable new functionalities. In particular, the
security of the CSI-SharK threshold signature protocol relies on the Vectorization Problem with Shifted Inputs where (in DLP formalism) the adversary not only receives $g$ and $g^x$, but also $g^{xc}$ for multiple known values of $c$. A natural open question is then whether the extra data provided to the adversary in this variant allows for more efficient attacks. In this paper, we revisit the concrete quantum security of this problem. We start from a quantum multiple hidden shift algorithm of Childs and van Dam, which to the best of our knowledge was never applied in cryptography before. We specify algorithms for its subroutines and we provide concrete complexity estimates for both these subroutines and the overall algorithm.
We then apply our analysis to the CSI-SharK protocol. In prior analyses based on Kuperberg’s algorithms, group action evaluations accounted for a significant part of the overall T-gate cost. For the suggested CSI-SharK parameters, our new approach requires significantly fewer calls to the group action evaluation subroutine, leading to significant T-gate complexity improvements overall. We also show that the quantum security of the protocol decreases when the number of public keys increases, and quantify this degradation.
Beyond its direct application to the CSI-SharK protocol, our work more generally questions the quantum security of vectorization problem variants, and it introduces the Childs-van Dam algorithm as a new quantum cryptanalysis tool.
Shweta Agrawal, Anuja Modi, Anshu Yadav, Shota Yamada
Evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) is a recently introduced, popular lattice assumption which has been used to tackle long-standing problems in lattice-based cryptography. In this work, we develop new counter-examples against Evasive LWE, in both the private- and public-coin regimes, propose counter-measures that define safety zones, and finally explore modifications to construct full compact FE/iO.
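For orientation, the assumption is often stated informally in roughly the following form (notation simplified; the exact distributions, quantifiers, and treatment of the auxiliary input differ between the public-coin, private-coin, and circular variants discussed below, so this should be read only as a schematic):
\[
\text{(pre)}\quad \bigl(\mathbf{B},\,\mathbf{P},\,\mathbf{s}^{\top}\mathbf{B}+\mathbf{e}^{\top},\,\mathbf{s}^{\top}\mathbf{P}+\mathbf{e}'^{\top},\,\mathsf{aux}\bigr)\;\approx_c\;\bigl(\mathbf{B},\,\mathbf{P},\,\mathsf{unif},\,\mathsf{unif},\,\mathsf{aux}\bigr)
\]
\[
\Longrightarrow\quad\text{(post)}\quad \bigl(\mathbf{B},\,\mathbf{P},\,\mathbf{s}^{\top}\mathbf{B}+\mathbf{e}^{\top},\,\mathbf{K},\,\mathsf{aux}\bigr)\;\approx_c\;\bigl(\mathbf{B},\,\mathbf{P},\,\mathsf{unif},\,\mathbf{K},\,\mathsf{aux}\bigr),
\]
where $\mathbf{K}$ is a short Gaussian preimage satisfying $\mathbf{B}\mathbf{K}=\mathbf{P}$ and $\mathsf{unif}$ denotes fresh uniform randomness. The counter-examples below exhibit samplers for $(\mathbf{P},\mathsf{aux})$ for which the pre-condition holds but the post-condition fails.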
Attacks: Our attacks are summarized as follows.
- The recent work by Hsieh, Lin and Luo [HLL23] constructed the first ABE for unbounded depth circuits by relying on the (public-coin) "circular" evasive LWE assumption, which incorporates circularity into the Evasive LWE assumption. We provide a new attack against this assumption by exhibiting a sampler such that the pre-condition is true but the post-condition is false.
- We demonstrate a counter-example against public-coin evasive LWE which exploits the freedom to choose the error distributions in the pre- and post-conditions. Our attack crucially relies on the error in the pre-condition being larger than the error in the post-condition.
- The recent work by Agrawal, Kumari and Yamada [AKY24a] constructed the first functional encryption scheme for pseudorandom functionalities ($\mathsf{prFE}$) and extended this to obfuscation for pseudorandom functionalities ($\mathsf{prIO}$) [AKY24c] by relying on private-coin evasive LWE. We provide a new attack against the stated assumption.
- The recent work by Branco et al. [BDJ+24] (concurrent to [AKY24c]) provides a construction of obfuscation for pseudorandom functionalities by relying on private-coin evasive LWE. By adapting the counter-example against [AKY24a], we provide an attack against this assumption.
- Branco et al. [BDJ+24] also showed that there exist contrived, somewhat "self-referential", classes of pseudorandom functionalities for which pseudorandom obfuscation cannot exist. We develop an analogous result in the setting of pseudorandom functional encryption.
While Evasive LWE was developed to specifically avoid zeroizing attacks as discussed above, our attacks show that in some (contrived) settings, the adversary may nevertheless obtain terms in the zeroizing regime.
Counter-measures: Guided by the lessons distilled from the above attacks, we develop counter-measures to protect against them. Our interpretation of the above attacks is that Evasive LWE, as defined, is too general; we suggest restrictions that identify safe zones for the assumption, within which the broken applications can be recovered.
Variants to give full FE and iO: Finally, we show that certain modifications of Evasive LWE, which respect the counter-measures developed above, yield full compact FE in the standard model. We caution that these candidates are presented primarily as targets for cryptanalysis, to further our understanding of this regime of assumptions.
Nico Döttling, Abhishek Jain, Giulio Malavolta, Surya Mathialagan, Vinod Vaikuntanathan
We present a simple counterexample to all known variants of the private-coin evasive learning with errors (LWE) assumption. Unlike prior works, our counterexample is direct, it does not use heavy cryptographic machinery (such as obfuscation or witness encryption), and it applies to all variants of the assumption. Our counterexample can be seen as a "zeroizing" attack against evasive LWE, calling into question the soundness of the underlying design philosophy.
Sanjam Garg, Aarushi Goel, Dimitris Kolonelos, Sina Shiehian, Rohit Sinha
We initiate the study of {\em split prover zkSNARKs}, which allow Alice to offload part of the zkSNARK computation to her assistant, Bob. In scenarios like online transactions (e.g., zCash), a significant portion of the witness (e.g., membership proofs of input coins) is often available to the prover (Alice) before the transaction begins. This setup offers Alice an opportunity to initiate the proof computation early, even before the entire witness is available. The remaining computation can then be delegated to Bob, who can complete it once the final witness (e.g., the transaction amount) is known.
To prevent Bob from generating proofs independently (e.g., initiating unauthorized transactions), it is essential that the data provided to him for the second phase of computation does not reveal the witness used in the first phase. Additionally, the verifier of the zkSNARK should be unable to determine whether the proof was generated solely by Alice or through this two-step process. To achieve this efficiently, we require this two-phase proof generation to only use cryptography in a black-box manner.
We propose a split prover zkSNARK based on the Groth16 zkSNARK [Groth, EUROCRYPT 2016], meeting all these requirements. Our solution is also \emph{asymptotically tight}, meaning it achieves the optimal second phase proof generation time for Groth16. Importantly, our split prover zkSNARK preserves the verification algorithm of the original Groth16 zkSNARK, enabling seamless integration into existing deployments of Groth16.
Wouter Castryck, Thomas Decru, Péter Kutas, Abel Laval, Christophe Petit, Yan Bo Ti
Following Ibukiyama, Katsura and Oort, all principally polarized superspecial abelian surfaces over $\overline{\mathbb{F}}_p$ can be represented by a certain type of $2 \times 2$ matrix $g$, having entries in the quaternion algebra $B_{p,\infty}$. We present a heuristic polynomial-time algorithm which, upon input of two such matrices $g_1, g_2$, finds a "connecting matrix" representing a polarized isogeny of smooth degree between the corresponding surfaces. Our algorithm should be thought of as a two-dimensional analog of the KLPT algorithm from 2014 due to Kohel, Lauter, Petit and Tignol for finding a connecting ideal of smooth norm between two given maximal orders in $B_{p,\infty}$.
The KLPT algorithm has proven to be a versatile tool in isogeny-based cryptography, and our analog has similar applications; we discuss two of them in detail. First, we show that it yields a polynomial-time solution to a two-dimensional analog of the so-called constructive Deuring correspondence: given a matrix $g$ representing a superspecial principally polarized abelian surface, realize the latter as the Jacobian of a genus-$2$ curve (or, exceptionally, as the product of two elliptic curves if it concerns a product polarization). Second, we show that, modulo a plausible assumption, Charles-Goren-Lauter style hash functions from superspecial principally polarized abelian surfaces require a trusted set-up. Concretely, if the matrix $g$ associated with the starting surface is known then collisions can be produced in polynomial time. We deem it plausible that all currently known methods for generating a starting surface indeed reveal the corresponding matrix. As an auxiliary tool, we present an explicit table for converting $(2,2)$-isogenies into the corresponding connecting matrix, a step for which a previous method by Chu required super-polynomial (but sub-exponential) time.
Aydin Abadi, Mohammad Naseri
Oblivious Transfer (OT) is a fundamental cryptographic primitive introduced nearly four decades ago. OT allows a receiver to select and learn $t$ out of $n$ private messages held by a sender. It ensures that the sender does not learn which specific messages the receiver has chosen, while the receiver gains no information about the remaining $n − t$ messages. In this work, we introduce the notion of functional OT (FOT), for the first time. FOT adds a layer of security to the conventional OT by ensuring that the receiver only learns a function of the selected messages rather than the $t$ individual messages themselves. We propose several protocols that realize this concept. In particular, we propose concrete instantiations of FOT when the function to be executed on the selected message is mean, mode, addition, or multiplication. The schemes are efficient and unconditionally secure. We also propose a non-trivial protocol that supports arbitrary functions on the selected messages mainly using fully homomorphic encryption (FHE) and oblivious linear function evaluation, where the number of FHE invocations is constant $O(1)$ with respect to $n$. Our asymptotic and concrete cost analyses demonstrate the efficiency of our unconditionally secure FOT protocols. FOT can enhance the security of privacy-preserving machine learning, particularly in (i) K-Nearest Neighbors schemes and (ii) client selection in Federated Learning (FL).
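To recall the primitive that FOT refines, here is a toy 1-out-of-2 OT in the Diffie-Hellman style (in the spirit of Chou and Orlandi's "simplest OT"), written with deliberately insecure parameters. It is purely illustrative and is not one of the FOT protocols proposed in the paper.

```python
# Toy 1-out-of-2 OT over a (much too small) prime-order-ish group, only to
# recall the basic primitive: the receiver learns exactly one of two messages,
# and the sender does not learn which one. Not the paper's FOT protocols.

import hashlib, secrets

p = 0xFFFFFFFFFFFFFFC5          # 2^64 - 59, a small prime; illustration only
g = 5

def H(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(32, "big")).digest()

def xor(key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(key, msg.ljust(32, b"\0")))

# Sender publishes A = g^a
a = secrets.randbelow(p - 1) + 1
A = pow(g, a, p)

# Receiver, with choice bit c, replies with B
c = 1
b = secrets.randbelow(p - 1) + 1
B = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p

# Sender derives one key per message; the receiver can compute only one of them
k0 = H(pow(B, a, p))
k1 = H(pow((B * pow(A, -1, p)) % p, a, p))
m0, m1 = b"message zero", b"message one"
c0, c1 = xor(k0, m0), xor(k1, m1)

# Receiver opens only the ciphertext matching its choice bit
kc = H(pow(A, b, p))
print(xor(kc, c1 if c else c0).rstrip(b"\0"))   # -> b'message one'
```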
Shalini Banerjee, Tapas Pal, Andy Rupp, Daniel Slamanig
Anamorphic encryption (AE) considers secure communication in the presence of a powerful surveillant (typically called a "dictator") who only allows certain cryptographic primitives and knows all the secret keys in a system. The basic idea is that there is a second (anamorphic) mode of encryption that allows the sender to transmit an anamorphic message using a double key to a receiver who can decrypt it using the corresponding double key. From the point of view of the dictator, the encryption keys as well as the ciphertexts in the regular and anamorphic modes are indistinguishable. The most recent works in this field consider public-key anamorphic encryption (PKAE), i.e., the sender of an anamorphic message requires an encryption double key (or no key at all) and the receiver requires a decryption double key. Known constructions, however, either work only for schemes that are mostly of theoretical interest or come with conceptual limitations.
In this paper we ask whether we can design PKAE schemes that avoid such limitations and are closer to the PKE schemes used in practice. In fact, such schemes are more likely to be allowed by a cognizant dictator. Moreover, we initiate the study of identity-based anamorphic encryption (IBAE), as the IBE setting seems to be a natural choice for a dictator. For both PKAE and IBAE, we show how well-known IND-CPA and IND-CCA secure primitives can be extended by an anamorphic encryption channel. In contrast to previous work, we additionally consider CCA (rather than just CPA) security notions for the anamorphic channel and also build upon CPA (rather than just CCA) secure PKE.
Finally, we ask whether it is possible to port the recent concept of anamorphic signatures, which considers constructing symmetric anamorphic channels when only signature schemes are allowed by the dictator, to the asymmetric setting, which we denote by public-key anamorphic signatures (PKAS). Here too, we consider security beyond IND-CPA for the anamorphic channel.
Xiaopeng Zhao, Zhenfu Cao, Xiaolei Dong, Zhusen Liu
Secure two-party comparison, known as Yao's millionaires' problem, has been a fundamental challenge in privacy-preserving computation. It enables two parties to compare their inputs without revealing the exact values of those inputs or relying on any trusted third party. One elegant approach to secure computation is based on homomorphic encryption. Recently, building on this approach, Carlton et al. (CT-RSA 2018) and Bourse et al. (CT-RSA 2020) presented novel solutions for the problem of secure integer comparison. These protocols have demonstrated significantly improved performance compared to the well-known and frequently used DGK protocol (ACISP 2007 and Int. J. Appl. Cryptogr. 1(4), 323–324, 2009). In this paper, we introduce a class of higher residuosity attacks, which can be regarded as an extension of the classical quadratic residuosity attack on the decisional Diffie-Hellman problem. We demonstrate that the small RSA subgroup decision problems, upon which both the CEK and BST protocols are based, are not difficult to solve when the prime base $p_0$ is small (e.g., $p_0 < 100$), which is precisely the setting in which the protocols achieve optimal overall performance. Furthermore, we offer recommendations for precluding such attacks, including one approach that does not adversely affect performance. We hope that these attacks can be applied to analyze other number-theoretic hardness assumptions.
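The classical quadratic residuosity attack that these higher residuosity attacks extend can be demonstrated in a few lines: in $\mathbb{Z}_p^*$, the quadratic character of $g^{ab}$ is determined by those of $g^a$ and $g^b$, which distinguishes DDH tuples from random ones with constant advantage. The Python sketch below uses toy parameters and shows only this textbook observation, not the attack on the CEK/BST subgroup assumptions.

```python
# Classical quadratic-residuosity attack on DDH in Z_p^*: with g a non-residue,
# g^{ab} is a square exactly when a or b is even, which a distinguisher can
# test with Euler's criterion. Toy, insecure parameters for illustration only.

import secrets

p = 1000003                      # small prime; 2 is a non-residue mod p
g = 2

def is_qr(x: int) -> bool:
    """Euler's criterion: x is a quadratic residue mod p iff x^((p-1)/2) = 1."""
    return pow(x, (p - 1) // 2, p) == 1

def ddh_distinguisher(ga: int, gb: int, gz: int) -> bool:
    """Guess 'real DDH tuple' iff the residuosity of gz matches the prediction."""
    predicted = is_qr(ga) or is_qr(gb)   # g^{ab} is a QR iff a or b is even
    return is_qr(gz) == predicted

wins, trials = 0, 10000
for _ in range(trials):
    a, b, c = (secrets.randbelow(p - 1) for _ in range(3))
    real = secrets.randbelow(2) == 1
    gz = pow(g, a * b, p) if real else pow(g, c, p)
    if ddh_distinguisher(pow(g, a, p), pow(g, b, p), gz) == real:
        wins += 1
print(wins / trials)             # noticeably above 1/2, around 0.75
```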
Amos Beimel, Oriol Farràs, Adriana Moya
In a secret sharing scheme with polynomial sharing, the secret is an element of a finite field, and the shares are obtained by evaluating polynomials on the secret and some random field elements, i.e., for every party there is a set of polynomials that computes the share of the party. These schemes generalize the linear ones, adding more expressivity and giving room for more efficient schemes. To identify the access structures for which this efficiency gain is relevant, we need a systematic method to identify the access structure of polynomial schemes; i.e., to identify which sets can reconstruct the secret in the scheme. As a first step, we study ideal polynomial secret sharing schemes where there is a single polynomial for each party. Ideal schemes have optimal share size because the size of each share is the size of the secret.
Our goal is to generalize results of linear secret sharing schemes, i.e., schemes in which the shares are computed by applying linear mappings and the linear dependency of these mappings determines their access structures. To achieve this goal, we study the connection between the algebraic dependency of the sharing polynomials and the access structure of the polynomial scheme. Our first result shows that if the degree of the sharing polynomials is not too big compared to the size of the field, then the algebraic dependence of the sharing polynomials determines the access structure of the scheme. This contributes to the characterization of ideal polynomial schemes and establishes a new connection between families of ideal schemes and algebraic matroids.
Conversely, we ask the question: If we associate a polynomial with each party and the dealer, can we use these polynomials to realize the access structure determined by the algebraic dependency of the polynomials? Our second result shows that these access structures admit statistical schemes with small shares. Finally, we extend this result to the general case where each party may have more than one polynomial.
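The prototypical ideal scheme in this family is Shamir's threshold scheme, in which each share is the evaluation of a random polynomial whose constant term is the secret. The short Python sketch below (toy field, illustrative parameters) recalls it as a concrete reference point for the schemes discussed above.

```python
# Minimal Shamir threshold sharing over a prime field: each share is the
# evaluation of a random degree-(t-1) polynomial with the secret as constant
# term; any t shares reconstruct the secret by Lagrange interpolation at 0.

import secrets

P = 2**61 - 1                    # a Mersenne prime used as the field modulus

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):                    # evaluate the sharing polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

s = 123456789
shares = share(s, t=3, n=5)
print(reconstruct(shares[:3]) == s)   # True: any 3 of the 5 shares suffice
print(reconstruct(shares[1:4]) == s)  # True
```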
Martin R. Albrecht, Russell W. F. Lai, Oleksandra Lapiha, Ivy K. Y. Woo
Lattice trapdoor algorithms allow us to sample hard random lattices together with their trapdoors, given which short lattice vectors can be sampled efficiently. This enables a wide range of advanced cryptographic primitives. In this work, we ask: can we distribute lattice trapdoor algorithms non-interactively?
We study a natural approach to sharing lattice trapdoors: splitting them into partial trapdoors for different lower-rank sublattices which allow the local sampling of short sublattice vectors. Given sufficiently many short sublattice vectors, these can then be combined to yield short vectors in the original lattice. Moreover, this process can be repeated an unbounded polynomial number of times without needing a party holding a full trapdoor to intervene. We further define one-wayness and indistinguishability properties for partial trapdoors.
We establish that such partial trapdoors exist with non-trivial performance under standard assumptions. Specifically, we prove these properties for a simple construction from the κ-SIS and κ-LWE assumptions, which were previously shown to be implied by the plain SIS and LWE assumptions, respectively. The security proofs extend naturally to the ring or module settings under the respective analogues of these assumptions, which have been conjectured to admit similar reductions.
Our partial trapdoors achieve non-trivial efficiency, with relevant parameters sublinear in the number of shareholders. Our construction is algebraic, without resorting to generic tools such as multiparty computation or fully homomorphic encryption. Consequently, a wide range of lattice-trapdoor-based primitives can be thresholdised non-interactively by simply substituting the trapdoor preimage sampling procedure with our partial analogue.
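For reference, the $\kappa$-SIS problem underlying the construction can be recalled informally as follows (notation simplified; the precise norm bounds and the exact span/independence condition are as in Boneh and Freeman's original formulation), with $\kappa$-LWE being the analogous hinted variant of LWE:
\[
\text{Given } \mathbf{A}\leftarrow\mathbb{Z}_q^{n\times m} \text{ and short } \mathbf{e}_1,\dots,\mathbf{e}_\kappa \text{ with } \mathbf{A}\mathbf{e}_i=\mathbf{0}\bmod q,\quad \text{find a short } \mathbf{e}\notin\operatorname{span}(\mathbf{e}_1,\dots,\mathbf{e}_\kappa) \text{ with } \mathbf{A}\mathbf{e}=\mathbf{0}\bmod q.
\]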
Amin Abdulrahman, Matthias J. Kannwischer, Thing-Han Lim
Highly-optimized assembly is commonly used to achieve the best performance for popular cryptographic schemes such as the newly standardized ML-KEM and ML-DSA.
The majority of implementations today rely on hand-optimized assembly for the core building blocks to achieve both security and performance.
However, recent work by Abdulrahman et al. takes a new approach, writing a readable base assembly implementation first and leaving the bulk of the optimization work to a tool named SLOTHY based on constraint programming.
SLOTHY performs instruction scheduling, register allocation, and software pipelining simultaneously using constraints modeling the architectural and microarchitectural details of the target platform.
In this work, we extend SLOTHY and investigate how it can be used to migrate already highly hand-optimized assembly to a different microarchitecture, while maximizing performance. As a case study, we optimize state-of-the-art Arm Cortex-M4 implementations of ML-KEM and ML-DSA for the Arm Cortex-M7.
Our results suggest that this approach is promising: For the number-theoretic transform (NTT) – the core building block of both ML-DSA and ML-KEM – we achieve speed-ups of $1.97\times$ and $1.69\times$, respectively. For Keccak – the permutation used by SHA-3 and SHAKE and also widely used in ML-DSA and ML-KEM – we achieve speed-ups of 30% compared to the M4 code and 5% compared to hand-optimized M7 code. For many other building blocks, we achieve similarly significant speed-ups of up to $2.35\times$. Overall, this results in 11% to 33% faster code for the entire cryptosystems.
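For readers unfamiliar with the kernel being optimized, the following plain Python reference NTT (recursive radix-2 Cooley-Tukey over a toy NTT-friendly prime) recalls what the hand-optimized assembly computes; it uses neither the ML-KEM/ML-DSA parameter sets nor any SLOTHY-generated code.

```python
# Reference number-theoretic transform over a toy prime: evaluate a length-N
# coefficient vector at the powers of a primitive N-th root of unity mod Q,
# plus the inverse transform. Purely illustrative; not an optimized kernel.

Q = 257                              # toy prime: 2^8 + 1, so 8 divides Q - 1
N = 8                                # toy transform length (a power of two)
OMEGA = pow(3, (Q - 1) // N, Q)      # primitive N-th root of unity (3 generates Z_257^*)

def ntt(a, omega):
    """Recursive radix-2 Cooley-Tukey NTT of a list whose length is a power of 2."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % Q)
    odd = ntt(a[1::2], omega * omega % Q)
    out = [0] * n
    w = 1
    for i in range(n // 2):
        t = w * odd[i] % Q
        out[i] = (even[i] + t) % Q               # butterfly: E[i] + w^i * O[i]
        out[i + n // 2] = (even[i] - t) % Q      # and E[i] - w^i * O[i]
        w = w * omega % Q
    return out

def intt(a):
    """Inverse transform: forward NTT at omega^{-1}, then scale by N^{-1} mod Q."""
    n_inv = pow(N, -1, Q)
    return [x * n_inv % Q for x in ntt(a, pow(OMEGA, -1, Q))]

coeffs = [1, 2, 3, 4, 5, 6, 7, 8]
assert intt(ntt(coeffs, OMEGA)) == coeffs        # round trip recovers the input
print(ntt(coeffs, OMEGA))
```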
Joël Alwen, Georg Fuchsbauer, Marta Mularczyk, Doreen Riepel
Updatable Public-Key Encryption (UPKE) augments the security of PKE with Forward Secrecy properties. While requiring more coordination between parties, UPKE enables much more efficient constructions than full-fledged Forward-Secret PKE. Alwen, Fuchsbauer and Mularczyk (AFM, Eurocrypt’24) presented the strongest security notion to date. It is the first to meet the needs of UPKE’s most important applications: Secure Group Messaging and Continuous Group Key Agreement. The authors provide a very efficient construction meeting their notion with classic security based on the Computational Diffie-Hellman (CDH) assumption in the Random Oracle Model (ROM).
In this work we present the first post-quantum secure UPKE construction meeting (a slight relaxation of) the AFM security notion. Based on the Module LWE assumption, our construction is practically efficient. Moreover, public key sizes are about $1/2$ and ciphertext sizes around $2/3$ of those of the state-of-the-art lattice-based UPKE scheme in the ROM by Abou Haidar, Passelègue and Stehlé – despite that scheme only being shown to satisfy a significantly weaker security notion. As the AFM proof relies on the random self-reducibility of CDH, which has no analogue for lattices, we develop a new proof technique for strong UPKE, identifying the core properties required from the underlying (lattice-based) encryption scheme.
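For intuition only, the basic UPKE mechanic can be illustrated with the folklore discrete-logarithm-based approach, in which an update is an encrypted shift of the key pair. The Python sketch below uses toy parameters and is neither the AFM scheme nor the lattice-based construction of this work; it only shows the update flow.

```python
# Toy UPKE update flow in the Diffie-Hellman setting: a sender picks a shift
# delta, encrypts it to the current public key, and both sides move the key
# pair by delta. Group, hashing and encoding choices are illustrative only.

import hashlib, secrets

p = 0xFFFFFFFFFFFFFFC5           # 2^64 - 59, a small prime; insecure toy group
g = 5

def kdf(x: int) -> int:
    return int.from_bytes(hashlib.sha256(x.to_bytes(32, "big")).digest(), "big")

def encrypt(pk: int, m: int):
    r = secrets.randbelow(p - 1) + 1
    return pow(g, r, p), kdf(pow(pk, r, p)) ^ m      # hashed-ElGamal style

def decrypt(sk: int, ct):
    c1, c2 = ct
    return kdf(pow(c1, sk, p)) ^ c2

# Receiver's initial key pair
sk = secrets.randbelow(p - 1) + 1
pk = pow(g, sk, p)

# Sender updates the key: the new public key is shifted by delta,
# and delta itself is transmitted encrypted under the old public key.
delta = secrets.randbelow(p - 1) + 1
update_ct = encrypt(pk, delta)
pk_new = pk * pow(g, delta, p) % p

# Receiver applies the update and can decrypt under the new key pair.
sk_new = (sk + decrypt(sk, update_ct)) % (p - 1)
msg = 42
assert decrypt(sk_new, encrypt(pk_new, msg)) == msg
print("updated key pair is consistent")
```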
Xuan Thanh Do, Dang Truong Mac, Ky Nguyen, Duong Hieu Phan, Quoc-Huy Vu
Traitor tracing is a traditional cryptographic primitive designed for scenarios with multiple legitimate receivers. When the plaintext - that is, the output of decryption - is leaked and more than one legitimate receiver exists, it becomes imperative to identify the source of the leakage, a need that has motivated the development of traitor tracing techniques. Recent advances in standard encryption have enabled decryption outcomes to be defined in a fine-grained manner through the introduction of Functional Encryption (FE). Constructing FE schemes is intriguing, and achieving the tracing property adds an additional layer of complexity. Traitor tracing techniques have been actively developed for more than three decades, yet they have always remained within the same framework - a single sender responsible for encrypting all the data.
However, fine-grained decryption is particularly useful when data originates from multiple sources, allowing for joint computation on personal data. This leads to the concept of multi-client functional encryption (MCFE), where multiple concurrent senders independently encrypt their data while agreeing on the decryption of a specific function (e.g., a statistical measure) computed on the aggregated data, without revealing any additional information. In the era of cloud computing and big data, privacy-preserving joint computation is crucial, and tracing the source of any breach by dishonest participants becomes essential. Thus, in this paper we take the first step toward addressing the tracing problem in the general context of joint computation with multiple senders. Our contributions are twofold:
- $\textbf{Conceptually:}$ We propose the first tracing model in the context of multi-sender encryption, namely $\textit{Traceable Multi-Client Functional Encryption}$ ($\textsf{TMCFE}$), which allows a pirate to extract secret information from both receivers and senders. Our model supports strong and naturally admissible decoders, removing artificial restrictions on the pirate decoder and thus addressing the shortcomings of existing traceable functional encryption schemes designed for the single-sender setting.
- $\textbf{Technically:}$ To achieve our conceptual objective, we build upon the recently introduced notion of strong admissibility for MCFE. Our main technical contribution is a generic compiler that transforms a large class of MCFE schemes with weak admissibility into schemes with strong admissibility. This compiler not only helps overcome existing challenges but may also be of general interest within the functional encryption domain. Finally, we present a concrete lattice-based scheme $\textsf{TMCFE}$ for inner-product functionalities that achieves post-quantum security under standard assumptions.
Haruhisa Kosuge, Keita Xagawa
When considering security against quantum adversaries, it is important to consider not only the traditional existential unforgeability (EUF-CMA security) but also security against adversaries making quantum queries to the signing oracle: plus-one security (PO security) and blind unforgeability (BU security), proposed by Boneh and Zhandry (Crypto 2013) and Alagic et al. (EUROCRYPT 2020), respectively. Hash-and-sign is one of the most common paradigms for constructing EUF-CMA-secure signature schemes in the quantum random oracle model, employing a trapdoor function and a hash function. It is known that its derandomized version is PO- and BU-secure. A variant of hash-and-sign, known as hash-and-sign with retry (HSwR), formulated by Kosuge and Xagawa (PKC 2024), is widespread since it allows for weaker security assumptions on the trapdoor function. Unfortunately, it has not been known whether HSwR can achieve PO and BU security, even with derandomization.
In this paper, we apply a derandomization with bounded loops to HSwR. We demonstrate that HSwR can achieve PO and BU security through this approach. Since derandomization with bounded loops offers advantages in some implementations, our results support its wider adoption, including in NIST PQC candidates.
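The control flow of hash-and-sign with retry, and of its derandomization with a bounded loop, can be sketched as follows. In this Python sketch the trapdoor inversion is a stub that fails on half of all targets, and the salt derivation and loop bound are illustrative choices; it does not correspond to any specific NIST candidate and only shows where the bounded loop and the deterministic per-attempt randomness sit.

```python
# Sketch of hash-and-sign with retry (HSwR), derandomized with a bounded loop:
# every per-attempt salt is derived deterministically from the secret key, the
# message, and a counter, so repeated signing of a message is reproducible.

import hashlib, hmac

MAX_RETRIES = 64                     # bound on the retry loop (illustrative)

def H(msg: bytes, salt: bytes) -> bytes:
    return hashlib.sha256(salt + msg).digest()

def invert(sk: bytes, target: bytes):
    """Stub trapdoor inversion: pretends half of all targets have no preimage."""
    if target[0] & 0x80:
        return None
    return hmac.new(sk, target, hashlib.sha256).digest()   # stand-in preimage

def sign_derandomized(sk: bytes, msg: bytes):
    """Hash-and-sign with retry; salts come from a PRF of (sk, msg, counter)."""
    for ctr in range(MAX_RETRIES):
        salt = hmac.new(sk, msg + ctr.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
        x = invert(sk, H(msg, salt))
        if x is not None:
            return salt, x
    raise RuntimeError("bounded retry loop exhausted")      # prob. ~2^-64 here

sk = b"\x01" * 32
sig1 = sign_derandomized(sk, b"hello")
sig2 = sign_derandomized(sk, b"hello")
assert sig1 == sig2                  # derandomized: same message, same signature
print(sig1[0].hex())
```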
Jeongsu Kim, Aaram Yun
There has been remarkable progress in fully homomorphic encryption ever since Gentry's first scheme. In contrast, fully homomorphic authentication primitives have received relatively little attention, despite the existence of some previous constructions. While there exist various schemes with different functionalities for fully homomorphic encryption, there are only a few options for fully homomorphic authentication. Moreover, there are even fewer options when considering two of the most important properties: adaptive security and pre-processable verification. To our knowledge, except for some concurrent works, achieving both properties requires the use of a nested construction, which involves homomorphically authenticating a homomorphic authentication tag of a message, making the scheme costly and complicated.
In this work, we propose a dedicated scheme for (leveled) fully homomorphic message authentication code that is adaptively secure and has pre-processable verification. Leveraging the secrecy of the primitive, we demonstrate that a slight modification of a selectively secure (leveled) fully homomorphic signature scheme yields an adaptively secure (leveled) fully homomorphic message authentication code with pre-processable verification. Additionally, we introduce a novel notion and generic transform to enhance the security of a homomorphic message authentication code, which also exploits the secrecy of the primitive.
Yuejun Wang, Baocang Wang, Qiqi Lai, Huaxiong Wang
In this work, we explore the field of lattice-based Predicate Encryption (PE), with a focus on enhancing compactness and refining functionality.
First, we present a more compact bounded collusion predicate encryption scheme compared to previous constructions, significantly reducing both the per-unit expansion and fixed overhead, while maintaining an optimal linear blow-up proportional to $Q$.
Next, we propose a Predicate Inner Product Functional Encryption (P-IPFE) scheme based on our constructed predicate encryption scheme. P-IPFE preserves the attribute-hiding property while enabling decryption to reveal only the inner product between the key and message vectors, rather than the entire message as in traditional PE. Our P-IPFE scheme also achieves bounded collusion resistance while inheriting the linear compactness optimized in the underlying PE scheme. Additionally, it supports any polynomial-sized and bounded-depth circuits, thereby extending beyond the inner-product predicate class in prior works.
Furthermore, all the proposed schemes achieve selective fully attribute-hiding security in the simulation-based model and can therefore further attain semi-adaptive security by adopting existing upgrading techniques.