IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
09 May 2025
Binbin Tu, Yujie Bai, Cong Zhang, Yang Cao, Yu Chen
Private set union (PSU) allows two parties to compute the union of their sets without revealing anything else. It can be categorized into balanced and unbalanced scenarios depending on the relative sizes of the two sets. Recently, Jia et al. (USENIX Security 2024) highlighted that existing scalable PSU solutions suffer from during-execution leakage and proposed a PSU with enhanced security for the balanced setting. However, their protocol's complexity is superlinear in the set size. Thus, the problem of constructing a linear enhanced PSU remains open, and no unbalanced enhanced PSU exists. In this work, we address these two open problems:
-Balanced case: We propose the first linear enhanced PSU. Compared to the state-of-the-art enhanced PSU (Jia et al., USENIX Security 2024), our protocol achieves a $2.2 - 8.8\times$ reduction in communication cost and a $1.2 - 8.6\times$ speedup in running time, depending on set sizes and network environments.
-Unbalanced case: We present the first unbalanced enhanced PSU, which achieves sublinear communication complexity in the size of the large set. Experimental results demonstrate that the larger the difference between the two set sizes, the better our protocol performs. For unbalanced set sizes $(2^{10},2^{20})$ with single thread in $1$Mbps bandwidth, our protocol requires only $2.322$ MB of communication. Compared with the state-of-the-art enhanced PSU, there is $38.1\times$ shrink in communication and roughly $17.6\times$ speedup in the running time.
Pierre Civit, Muhammad Ayaz Dzulfikar, Seth Gilbert, Rachid Guerraoui, Jovan Komatovic, Manuel Vidigueira
Byzantine Agreement (BA) allows $n$ processes to propose input values to reach consensus on a common, valid $L_o$-bit value, even in the presence of up to $t < n$ faulty processes that can deviate arbitrarily from the protocol. Although strategies like randomization, adaptiveness, and batching have been extensively explored to mitigate the inherent limitations of one-shot agreement tasks, there has been limited progress on achieving good amortized performance for multi-shot agreement, despite its obvious relevance to long-lived functionalities such as state machine replication.
Observing that a weak form of accountability suffices to identify and exclude malicious processes, we propose new efficient and deterministic multi-shot agreement protocols for multi-value validated Byzantine agreement (MVBA) with a strong unanimity validity property (SMVBA) and interactive consistency (IC). Specifically, let $\kappa$ represent the size of the cryptographic objects needed to solve Byzantine agreement when $n<3t$. We achieve both IC and SMVBA with $O(1)$ amortized latency, with a bounded number of slower instances. The SMVBA protocol has $O(nL_o +n\kappa)$ amortized communication and the IC has $O(nL_o + n^2\kappa)$ amortized communication. For input values larger than $\kappa$, our protocols are asymptotically optimal. These results mark a substantial improvement—up to a linear factor, depending on $L_o$—over prior results. To the best of our knowledge, the present paper is the first to achieve the long-term goal of implementing a state machine replication abstraction of a distributed service that is just as fast and efficient as its centralized version, but with greater robustness and availability.
Ahmet Malal
The ASCON algorithm was chosen for its efficiency and suitability for resource-constrained environments such as IoT devices. In this paper, we present a high-performance FPGA implementation of ASCON-128 and ASCON-128a, optimized for the throughput-to-area ratio. By utilizing a 6-round permutation in one cycle for ASCON-128 and a 4-round permutation in one cycle for ASCON-128a, we have effectively maximized throughput while ensuring efficient resource utilization. Our implementation shows significant improvements over existing designs, achieving 34.16\% better throughput-to-area efficiency on Artix-7 and 137.58\% better throughput-to-area efficiency on Kintex-7 FPGAs. When comparing our results on the Spartan-7 FPGA with Spartan-6, we observed a 98.63\% improvement in throughput-to-area efficiency. However, it is important to note that this improvement may also be influenced by the advanced capabilities of the Spartan-7 platform compared to the older Spartan-6, in addition to the design optimizations implemented in this work.
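As background on the metric being optimized, the sketch below shows how a throughput-to-area figure can be derived for a round-unrolled permutation core; the clock frequency, LUT count, and resulting number are hypothetical placeholders, not results from this work.

```python
# Illustrative throughput-to-area calculation for a round-unrolled AEAD core.
# All concrete numbers below are hypothetical placeholders, NOT results from the paper.

def throughput_to_area(rate_bits, rounds_per_block, rounds_per_cycle, f_max_mhz, area_luts):
    """Throughput (Mbps) per LUT for a permutation-based AEAD core.

    rate_bits        : bits absorbed/squeezed per permutation call
    rounds_per_block : permutation rounds executed per data block
    rounds_per_cycle : rounds computed in a single clock cycle (unrolling factor)
    """
    cycles_per_block = rounds_per_block / rounds_per_cycle
    throughput_mbps = rate_bits * f_max_mhz / cycles_per_block
    return throughput_mbps / area_luts

# ASCON-128 processes 64-bit blocks with a 6-round permutation; unrolling all
# 6 rounds into one cycle gives one block per cycle (placeholder frequency/area).
print(throughput_to_area(rate_bits=64, rounds_per_block=6,
                         rounds_per_cycle=6, f_max_mhz=200, area_luts=1500))
```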
Christoph Graebnitz, Nicolas Buchmann, Martin Seiffert, Marian Margraf
Recently, there has been a growing interest in anonymous credentials (ACs) as they can mitigate the risk of personal data being processed by untrusted actors without consent and beyond the user's control. Furthermore, due to the privacy-by-design paradigm of ACs, they can prove possession of personal attributes, such as an authenticated government document containing sensitive personal information, while preserving the privacy of the individual by not actually revealing the data. Typically, AC specifications consider the privacy of individuals during the presentation of an AC, but often neglect privacy-preserving approaches for enhanced security features such as AC non-duplication or AC revocation. To achieve more privacy-friendly enhanced security features of non-duplication and privacy-preserving revocation, an AC can be partially stored on secure, trusted hardware and linked to a status credential that reflects its revocation status.
In this paper, we specify an AC system that satisfies the requirements of minimality of information, unlinkability, non-duplication, and privacy-preserving revocation.
This is achieved by adapting the hardware binding method of the Direct Anonymous Attestation protocol with the BBS+ short group signatures of Camenisch et al. and combining it with status credentials.
Zoë Ruha Bell, Anvith Thudi
Sampling from non-uniform randomness according to an algorithm which keeps the internal randomness used by the sampler hidden is increasingly important for cryptographic applications, such as timing-attack-resistant lattice-based cryptography or certified differential privacy. In this paper we present a provably efficient sampler that maintains random sample privacy, or random sample hiding, and is applicable to arbitrary discrete random variables. Namely, we present a constant-time version of the classic Knuth-Yao algorithm that we name "trimmed-tree" Knuth-Yao. We establish distribution-tailored Boolean circuit complexity bounds for this algorithm, in contrast to the previous naive distribution-agnostic bounds. For a $\sigma^2$-sub-Gaussian discrete distribution where $b_t$ is the number of bits for representing the domain, and $b_p$ is the bits of precision of the PDF values, we prove the Boolean circuit complexity of the trimmed-tree Knuth-Yao algorithm has upper bound $O(\sigma b_p^{3/2} b_t)$, an exponential improvement over the naive bounds, and in certain parameter regimes establish the lower bound $\widetilde{\Omega}( ( \sigma + b_p ) b_t )$. Moreover, by proving that the subtrees in the trimmed-tree Knuth-Yao circuit are small, we prove it can be computed by running $b_p$ circuits of size $O(\sigma b_p^{1/2} b_t)$ in parallel and then running $O(b_p b_t )$ sequential operations on the output. We apply these circuits for trimmed-tree Knuth-Yao to construct random variable commitment schemes for arbitrary discrete distributions, giving exponential improvements in the number of random bits and circuit complexity used for certified differentially private means and counting queries over large datasets and domains.
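For readers unfamiliar with the baseline, here is a minimal sketch of the classic (non-constant-time) Knuth-Yao walk over a bit matrix of probabilities; it illustrates only the textbook algorithm, not the trimmed-tree, constant-time circuit construction developed in the paper.

```python
import random

def knuth_yao_sample(prob_bits, rng=random):
    """Sample from a discrete distribution via the classic Knuth-Yao DDG walk.

    prob_bits[row][col] is the col-th binary digit (after the point) of the
    probability of outcome `row`; the probabilities must sum to exactly 1.
    """
    d = 0                                       # node index within the current tree level
    for col in range(len(prob_bits[0])):
        d = 2 * d + rng.getrandbits(1)          # descend one level with a fresh random bit
        for row in range(len(prob_bits)):       # visit the terminal nodes of this level
            d -= prob_bits[row][col]
            if d < 0:
                return row
    raise RuntimeError("probability bits do not sum to 1")

# P(0) = 1/2, P(1) = 1/4, P(2) = 1/4, written with 2 bits of precision.
table = [[1, 0], [0, 1], [0, 1]]
print([knuth_yao_sample(table) for _ in range(10)])
```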
Momonari Kudo, Kazuhiro Yokoyama
Nowadays, the notion of semi-regular sequences, originally proposed by Fröberg, has become very important not only in Mathematics, but also in Information Science, in particular Cryptology. For example, it is highly expected that randomly generated polynomials form a semi-regular sequence, and based on this observation, secure cryptosystems based on polynomial systems can be devised. In this paper, we deal with semi-regular sequences and a variant, named generalized cryptographic semi-regular sequences, and give a precise analysis of the complexity of computing a Gröbner basis of the ideal generated by such a sequence, with the help of several regularities of the ideal related to Lazard's bound on the maximal Gröbner basis degree and other bounds. We also study the genericness of the property that a sequence is semi-regular, and its variants related to Fröberg's conjecture. Moreover, we discuss the genericness of another important property, that the initial ideal is weakly reverse lexicographic, related to Moreno-Socías' conjecture, and show some criteria to examine whether both Fröberg's conjecture and Moreno-Socías' conjecture hold at the same time.
Robert Schädlich
Multi-client Attribute-Based Encryption (MC-ABE) is a generalization of key-policy ABE where attributes can be independently encrypted across several ciphertexts w.r.t. labels, and a joint decryption of these ciphertexts is possible if and only if (1) all ciphertexts share the same label, and (2) the combination of attributes satisfies the policy of the decryption key. All encryptors have their own secret key, and security is preserved even if some of them are known to the adversary.
Very recently, Pointcheval et al. (TCC 2024) presented a semi-generic construction of MC-ABE for restricted function classes, e.g., NC0 and constant-threshold policies. We identify an abstract criterion common to all their policy classes which suffices to present the construction in a fully black-box way and allows for a slight strengthening of the supported policy classes. The construction of Pointcheval et al. is based on pairings. We additionally provide a new lattice-based instantiation from (public-coin) evasive LWE.
Furthermore, we revisit existing constructions for policies that can be viewed as a conjunction of local policies (one per encryptor). Existing constructions from MDDH (Agrawal et al., CRYPTO 2023) and LWE (Francati et al., EUROCRYPT 2023) do not support encryption w.r.t. different labels. We show how this feature can be included. Notably, the security model of Francati et al. additionally guarantees attribute-hiding but does not capture collusions. Our new construction is also attribute-hiding and provides resilience against any polynomially bounded number of collusions which must be fixed at the time of setup.
Simon Damm, Nicolai Kraus, Alexander May, Julian Nowakowski, Jonas Thietke
The Fiat-Shamir transform is one of the most widely applied methods for secure signature construction. Fiat-Shamir starts with an interactive zero-knowledge identification protocol and transforms this via a hash function into a non-interactive signature. The protocol's zero-knowledge property ensures that a signature does not leak information on its secret key $\mathbf s$, which is achieved by blinding $\mathbf s$ via proper randomness $\mathbf y$.
The most prominent Fiat-Shamir examples are DSA signatures and the new post-quantum standard Dilithium.
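To make the role of the randomness $\mathbf y$ concrete, below is a minimal Schnorr-style signature obtained via the Fiat-Shamir transform, over a deliberately tiny and insecure toy group; the response $z = y + c \cdot s$ is exactly where partial leakage of $y$ endangers the secret key, which is the setting of the attack described next.

```python
import hashlib
import secrets

# Toy Schnorr signature via the Fiat-Shamir transform. The group parameters
# (p = 23, q = 11, g = 4 generating the order-q subgroup) are INSECURE and
# only illustrate how the randomness y blinds the secret key s.
p, q, g = 23, 11, 4

def H(*parts):
    """Fiat-Shamir challenge: hash of the commitment and the message."""
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big") % q

def keygen():
    s = secrets.randbelow(q - 1) + 1          # secret key
    return s, pow(g, s, p)                    # (s, public key A = g^s)

def sign(s, msg):
    y = secrets.randbelow(q - 1) + 1          # fresh per-signature randomness
    w = pow(g, y, p)                          # commitment
    c = H(w, msg)                             # challenge (replaces the verifier)
    z = (y + c * s) % q                       # response; y hides s
    return c, z

def verify(A, msg, sig):
    c, z = sig
    w = (pow(g, z, p) * pow(A, -c, p)) % p    # recompute the commitment
    return H(w, msg) == c

s, A = keygen()
assert verify(A, b"hello", sign(s, b"hello"))
```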
In practice, DSA signatures have experienced fatal attacks via leakage of a few bits of the randomness $\mathbf y$ per signature. Similar attacks now emerge for lattice-based signatures, such as Dilithium.
We build on, improve and generalize the pioneering leakage attack on Dilithium by Liu, Zhou, Sun, Wang, Zhang, and Ming. In theory, their original attack can recover a 256-dimensional subkey of Dilithium-II (aka ML-DSA-44) from leakage in a single bit of $\mathbf{y}$ per signature, in any bit position $j \geq 6$. However, the memory requirement of their attack grows exponentially in the bit position $j$ of the leak. As a consequence, if the bit leak is in a high-order position, then their attack is infeasible.
In our improved attack, we introduce a novel transformation that allows us to get rid of the exponential memory requirement. Thereby, we make the attack feasible for all bit positions $j \geq 6$. Furthermore, our novel transformation significantly reduces the number of required signatures in the attack.
The attack applies more generally to all Fiat-Shamir-type lattice-based signatures. For a signature scheme based on module LWE over an $\ell$-dimensional module, the attack uses a 1-bit leak per signature to efficiently recover a $\frac{1}{\ell}$-fraction of the secret key. In the ring LWE setting, which can be seen as module LWE with $\ell = 1$, the attack thus recovers the whole key. For Dilithium-II, which uses $\ell = 4$, knowledge of a $\frac{1}{4}$-fraction of the 1024-dimensional secret key lets its security estimate drop significantly from $128$ to $84$ bits.
Renas Bacho, Alireza Kavousi
Distributed Key Generation (DKG) protocols are fundamental components of threshold cryptography, enabling key generation in a trustless manner for a range of cryptographic operations such as threshold encryption and signing. Of particular widespread use are DKG protocols for discrete-logarithm based cryptosystems. In this Systematization of Knowledge (SoK), we present a comprehensive analysis of existing DKG protocols in the discrete-logarithm setting, with the goal of identifying cryptographic techniques and design principles that facilitate the development of secure and resilient protocols. To offer a structured overview of the literature, we adopt a modular approach and classify DKG protocols based on their underlying network assumption and cryptographic tools. These two factors determine how DKG protocols manage secret sharing and reach consensus as their essential building blocks. We also highlight various insights and suggest future research directions that could drive further advancements in this area.
Aviv Frenkel, Dmitry Kogan
We present an attack on the Abstract Datagram Network Layer (ADNL) protocol used in The Open Network (TON), currently the 10th largest blockchain by market capitalization. In its TCP variant, ADNL secures communication between clients and specialized nodes called liteservers, which provide access to blockchain data. We identify two cryptographic design flaws in this protocol: a handshake that permits session-key replay and a non-standard integrity mechanism whose security critically depends on message confidentiality. We transform these vulnerabilities into an efficient plaintext-recovery attack by exploiting two ADNL communication patterns, allowing message reordering across replayed sessions. We then develop a plaintext model for this scenario and construct an efficient algorithm that recovers the keystream using a fraction of known plaintexts and a handful of replays. We implement our attack and show that an attacker intercepting the communication between a TON liteserver and a widely deployed ADNL client can recover the keystream used to encrypt server responses by performing eight connection replays to the server. This allows the decryption of sensitive data, such as account balances and user activity patterns. Additionally, the attacker can modify server responses to manipulate blockchain information displayed to the client, including account balances and asset prices.
Fredrik Meisingseth, Christian Rechberger
The literature on computational differential privacy (CDP) has focused almost exclusively on definitions that are computational analogs of `pure' $(\epsilon,0)$-DP. We initiate the formal study of computational versions of approximate DP, i.e. $(\epsilon, \delta)$-DP with non-negligible $\delta$. We focus on IND-CDP and SIM$_{\forall\exists}$-CDP and show that the hierarchy between them when $\delta > 0$ potentially differs substantially from when $\delta = 0$. In one direction, we show that for $\delta < 1$, any mechanism which is $(\epsilon,\delta)$-SIM$_{\forall\exists}$-CDP also is $(\epsilon,\delta)$-IND-CDP, but only if $\epsilon$ is logarithmic in the security parameter. As a special case, this proves that the existing implication from $(\epsilon,0)$-SIM$_{\forall\exists}$-CDP to $(\epsilon,0)$-IND-CDP does not hold for arbitrary $\epsilon$, as previously claimed. Furthermore, we prove that when the parameters are the same in IND-CDP and SIM$_{\forall\exists}$-CDP and $\epsilon$ is superlogarithmic, there exists a natural task that can be solved whilst satisfying SIM$_{\forall\exists}$-CDP but which no IND-CDP mechanism can solve. This is the first separation in the CDP literature which is not due to using a task contrived specifically in order to give rise to the separation.
In the other direction, we show that the techniques for establishing an implication from $(\epsilon,0)$-IND-CDP to $(\epsilon,0)$-SIM$_{\forall\exists}$-CDP extend only to showing that a mechanism which is $(\epsilon,\delta)$-IND-CDP is also $(\epsilon,\delta')$-SIM$_{\forall\exists}$-CDP for some $\delta' > \delta$. Finally, we show that the Groce-Katz-Yerukhimovich barrier results against separations between CDP and statistical DP also hold in the setting of non-negligible $\delta$.
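For reference, the statistical guarantee that these computational notions relax is standard approximate DP: a mechanism $M$ is $(\epsilon,\delta)$-DP if for all neighbouring datasets $D, D'$ and all measurable sets $S$ of outputs,
$$\Pr[M(D) \in S] \le e^{\epsilon}\,\Pr[M(D') \in S] + \delta.$$
The computational variants discussed above weaken this requirement so that it only needs to hold with respect to computationally bounded observers.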
Xufeng Zhang, Baohan Huang, Sisi Duan, Haibin Zhang
Most practical synchronous Byzantine fault-tolerant (BFT) protocols, such as Sync HotStuff (S&P 2020), follow the convention of partially synchronous BFT and adopt a deterministic design. Indeed, although these protocols have O(n) time complexity, they exhibit impressive performance in failure-free scenarios.
This paper challenges this conventional wisdom, showing that a randomized paradigm terminating in expected O(1) time may well outperform prior ones even in failure-free scenarios. Our framework reduces synchronous BFT to a new primitive called multi-valued Byzantine agreement with strong external validity (MBA-SEV). Inspired by the external validity property of multi-valued validated Byzantine agreement (MVBA), the additional validity properties allow us to build a BFT protocol where replicas agree on the hashes of the blocks. Our instantiation of the paradigm, Sonic, achieves O(n) amortized message complexity per block proposal, expected O(1) time, and enables a fast path of only two communication steps.
Our evaluation results using up to 91 instances on Amazon EC2 show that the peak throughput of Sonic and P-Sonic (a pipelining variant of Sonic) is 2.24x-14.52x and 3.08x-24.25x that of Sync HotStuff, respectively.
Yaobin Shen, Lei Wang, Dawu Gu
Key derivation functions can be used to derive variable-length random strings that serve as cryptographic keys. They are integral to many widely used communication protocols such as TLS, IPsec and Signal. NIST SP 800-108 specifies several key derivation functions based on pseudorandom functions such as CMAC and HMAC, which can be used to derive additional keys from an existing cryptographic key. This standard either explicitly or implicitly requires its KDFs to be variable-output-length pseudorandom functions, collision resistant, and preimage resistant. Yet, since the publication of this standard in 2008, there has been no formal analysis to justify these security properties of its KDFs.
In this work, we give a formal security analysis of the key derivation functions in NIST SP 800-108. We show both positive and negative results regarding these key derivation functions. For KCTR-CMAC, KFB-CMAC, and KDPL-CMAC, the key derivation functions based on CMAC in counter mode, feedback mode, and double-pipeline mode respectively, we prove that all of them are secure variable-output-length pseudorandom functions and are preimage resistant. We show that KFB-CMAC and KDPL-CMAC are collision resistant, while for KCTR-CMAC we mount a collision attack that requires only six block cipher queries and succeeds with probability 1/4. For KCTR-HMAC, KFB-HMAC, and KDPL-HMAC, the key derivation functions based on HMAC in the corresponding modes, we show that all of them behave like variable-output-length pseudorandom functions. When the key of these key derivation functions is of variable length, they suffer from collision attacks. When the key is instead of fixed length and less than $d-1$ bits, where $d$ is the input block size of the underlying compression function, we prove that they are collision resistant and preimage resistant.
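As an illustration of the constructions being analyzed, here is a sketch in the spirit of the counter-mode KDF (the KCTR family), instantiated with HMAC-SHA-256 as the PRF; the exact encoding of the fixed input data (counter and length field widths, field order) is a parameter of SP 800-108, so the 32-bit choices below are assumptions made for the example.

```python
import hmac
import hashlib

def kdf_ctr_hmac_sha256(key: bytes, label: bytes, context: bytes, out_bits: int) -> bytes:
    """SP 800-108-style counter-mode KDF with HMAC-SHA-256 as the PRF.

    Fixed input data here follows one common layout:
    [i]_32 || Label || 0x00 || Context || [L]_32, where the 32-bit counter
    and length fields are illustrative choices, not mandated widths.
    """
    prf_out_bits = 256
    n = -(-out_bits // prf_out_bits)           # number of PRF calls (ceiling)
    out = b""
    for i in range(1, n + 1):
        data = (i.to_bytes(4, "big") + label + b"\x00" +
                context + out_bits.to_bytes(4, "big"))
        out += hmac.new(key, data, hashlib.sha256).digest()
    return out[: (out_bits + 7) // 8]          # leftmost L bits (as whole bytes)

subkey = kdf_ctr_hmac_sha256(b"\x00" * 32, b"enc-key", b"session-42", 128)
```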
In this work, we give the formal security analysis of key derivation functions in NIST SP 800-108. We show both positive and negative results regarding these key derivation functions. For KCTR-CMAC, KFB-CMAC, and KDPL-CMAC that are key derivation functions based on CMAC in counter mode, feedback mode, and double-pipeline mode respectively, we prove that all of them are secure variable output length pseudorandom functions and preimage resistance. We show that KFB-CMAC and KDPL-CMAC are collision resistance. While for KCTR-CMAC, we can mount collision attack against it that requires only six block cipher queries and can succeed with probability 1/4. For KCTR-HMAC, KFB-HMAC, and KDPL-HMAC that are key derivation functions based on HMAC in modes, we show that all of them behave like variable output length pseudorandom functions. When the key of these key derivation functions is of variable length, they suffer from collision attacks. For the case when the key of these key derivation function is of fixed length and less than $d-1$ bits where $d$ is the input block size of the underlying compression function, we can prove that they are collision resistant and preimage resistant.
Luca Campa, Arnab Roy
Arithmetization-Oriented (AO) symmetric primitives play an important role in the efficiency and security of zero-knowledge (ZK) proof systems. The design and cryptanalysis of AO symmetric-key primitives is a new topic particularly focusing on algebraic aspects. An efficient AO hash function aims at lowering the multiplicative complexity in the arithmetic circuit of the hash function over a suitable finite field. The AO hash function Anemoi was proposed in CRYPTO 2023.
In this work we present an in-depth Groebner basis (GB) cryptanalysis of Anemoi over GF(p). The main aim of any GB cryptanalysis is to obtain a well-structured set of polynomials representing the target primitive, and finally solve this system of polynomials using an efficient algorithm.
We propose a new polynomial modelling for Anemoi that we call ACICO. We show that using ACICO one can obtain a GB defined by a well-structured set of polynomials. Moreover, by utilising ACICO we can prove the exact complexity of the Groebner basis computation (w.r.t. Buchberger's algorithm) in the cryptanalysis of Anemoi. The structured GB further allows us to prove the dimension of the quotient space, which was conjectured in a recently published work. Afterwards, we provide the complexity analysis for computing the variety (or the solutions) of the GB polynomial system (corresponding to Anemoi), which is the final step in GB cryptanalysis, using known approaches. In particular, we show that the GB polynomial structure allows us to use the Wiedemann algorithm and improve the efficiency of cryptanalysis compared to previous works.
Our GB cryptanalysis is applicable to more than two branches (a parameter in Anemoi), while previously published results showed cryptanalysis only for two branches. Our complexity analysis implies that the security of Anemoi should not rely on the hardness of GB computation.
We also positively answer an important mathematical question in the GB cryptanalysis of Anemoi, namely whether the Anemoi polynomial system has a shape form. By proving this, we guarantee that upon application of a basis conversion method such as FGLM, one can obtain a convenient system of polynomials that is easy to solve.
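As a toy illustration of the overall GB-then-solve workflow (unrelated to the ACICO modelling itself), the snippet below computes a lexicographic Groebner basis of a small system with SymPy; for this toy system the basis is in shape position, so it ends in a univariate polynomial and the other variable is expressed in terms of it, which is what makes the final solving step easy.

```python
from sympy import symbols, groebner

x, y = symbols("x y")
# Toy system: a lex Groebner basis in shape position reduces solving to
# univariate root finding followed by back-substitution.
G = groebner([x**2 + y - 3, x + y**2 - 5], x, y, order="lex")
print(G)
```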
Christodoulos Pappas, Dimitris Papadopoulos, Charalampos Papamanthou
In this work, we introduce HydraProofs, the first vector commitment (VC) scheme that achieves the following two properties. (i) The prover can produce all the opening proofs for different elements (or consecutive sub-arrays) of a vector of size N in optimal time O(N). (ii) It is directly compatible with a family of zkSNARKs that encode their input as a multi-linear polynomial, i.e., our VC can be directly used when running the zkSNARK on its pre-image, without the need to ``open'' the entire vector pre-image inside the zkSNARK. To the best of our knowledge, all prior VC schemes either achieve (i) but are not efficiently ``pluggable'' into zkSNARKs (e.g., a Merkle tree commitment that requires re-computing the entire hash tree inside the circuit), or achieve (ii) but take $O(N \log N)$ time. We then combine HydraProofs with the seminal GKR protocol and apply the resulting zkSNARK in a setting where multiple users participate in a computation executed by an untrusted server and each user wants to ensure the correctness of the result and that her data was included. Our experimental evaluation shows our approach outperforms prior ones by 4-16x for prover times on general circuits. Finally, we consider two concrete application use cases, verifiable secret sharing and verifiable robust aggregation. For the former, our construction achieves the first scheme for Shamir's secret sharing with a linear-time prover (lower than the time needed for the dealer computation). For the latter, we propose a scheme that works against misbehaving aggregators, and our experiments show it can be reasonably deployed in existing schemes with minimal slow-downs.
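As background for the phrase ``encode their input as a multi-linear polynomial'': the multilinear extension of a length-N vector can be evaluated at a random point in O(N) field operations by folding one variable at a time, as in the generic sketch below. This is standard GKR-style machinery, not the HydraProofs construction itself.

```python
def mle_eval(values, point, p):
    """Evaluate the multilinear extension of `values` (length 2^n) at `point`
    (n field elements) over the prime field F_p, in O(N) field operations."""
    assert len(values) == 1 << len(point)
    vals = [v % p for v in values]
    for r in point:                            # fold out one variable at a time
        half = len(vals) // 2
        vals = [((1 - r) * vals[i] + r * vals[i + half]) % p
                for i in range(half)]
    return vals[0]

# Example: the MLE of the vector (3, 1, 4, 1) evaluated at (2, 5) over F_97.
print(mle_eval([3, 1, 4, 1], point=[2, 5], p=97))
```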
Nouri Alnahawi, Melissa Azouaoui, Joppe W. Bos, Gareth T. Davies, SeoJeong Moon, Christine van Vredendaal, Alexander Wiesmaier
Passports, identity cards and travel visas are examples of machine readable travel documents (MRTDs), or eMRTDs for their electronic variants. The data exchanged between these documents and a reader is secured with a standardized password authenticated key exchange (PAKE) protocol known as PACE.
A new world-wide protocol migration is expected with the arrival of post-quantum cryptography (PQC) standards. In this paper, we focus on the impact of this migration on constrained embedded devices as used in eMRTDs. We present a feasibility study of a candidate post-quantum secure PAKE scheme as the replacement for PACE on existing widely deployed resource-constrained chips. In a wider context, we study the size, performance and security impact of adding post-quantum cryptography with a focus on chip storage and certificate chains for existing eMRTDs.
We show that if the required post-quantum certificates for the eMRTD fit in memory, the migration of existing eMRTD protocols to their post-quantum secure equivalents is already feasible, but a performance penalty has to be paid. When using a resource-constrained SmartMX3 P71D600 smart card, designed with classical cryptography in mind, a post-quantum secure PAKE algorithm using the recommended parameter set of the new PQC standard ML-KEM can be executed in under a second. This migration will be aided by the future inclusion of dedicated hardware accelerators and increased memory to allow storage of larger keys and improve performance.
Azade Rezaeezade, Trevor Yap, Dirmanto Jap, Shivam Bhasin, Stjepan Picek
We present a dataset of side-channel power measurements captured during pair-pointwise multiplication in the decapsulation procedure of the Kyber Key Encapsulation Mechanism (KEM). The dataset targets the pair-pointwise multiplication step in the NTT domain, a key computational component of Kyber. The dataset is collected using the reference implementation from the PQClean project. We hope the dataset helps in research in ``classical'' power analysis and deep learning-based side-channel attacks on post-quantum cryptography (PQC).
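For context on the targeted operation, the following sketch follows the public Kyber specification for pair-pointwise (base-case) multiplication in the NTT domain: each of the 128 coefficient pairs is multiplied modulo $X^2 - \zeta^{2\,\mathrm{BitRev}_7(i)+1}$ over $\mathbb{Z}_{3329}$. It mirrors the specification, not the profiled PQClean code.

```python
Q = 3329           # Kyber modulus
ZETA = 17          # primitive 256th root of unity mod Q (per the Kyber spec)

def brv7(i):
    """7-bit bit reversal used to index the twiddle factors."""
    return int(f"{i:07b}"[::-1], 2)

def basemul_pair(a0, a1, b0, b1, zeta):
    """Multiply (a0 + a1*X) * (b0 + b1*X) modulo (X^2 - zeta) over Z_Q."""
    c0 = (a0 * b0 + a1 * b1 * zeta) % Q
    c1 = (a0 * b1 + a1 * b0) % Q
    return c0, c1

def pointwise_multiply(a_hat, b_hat):
    """Pair-pointwise multiplication of two degree-255 polynomials in the
    NTT domain: 128 independent two-coefficient products."""
    c_hat = [0] * 256
    for i in range(128):
        zeta = pow(ZETA, 2 * brv7(i) + 1, Q)
        c_hat[2 * i], c_hat[2 * i + 1] = basemul_pair(
            a_hat[2 * i], a_hat[2 * i + 1], b_hat[2 * i], b_hat[2 * i + 1], zeta)
    return c_hat

# Tiny smoke test with constant polynomials in the NTT domain.
c_hat = pointwise_multiply([1] * 256, [2] * 256)
```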
Seunghwan Lee, Jaesang Noh, Taejeong Kim, Dohyuk Kim, Dong-Joon Shin
SPDZ-style and BMR-style protocols are widely known as practical MPC protocols that achieve active security in the dishonest majority setting. However, to date, SPDZ-style protocols have not achieved constant rounds, and BMR-style protocols have struggled to achieve scalable communication or computation. Additionally, there exist fully homomorphic encryption (FHE)-based MPC protocols that achieve both constant rounds and scalable communication, but they face challenges in achieving active security in the dishonest majority setting and are considered impractical due to computational inefficiencies.
In this work, we propose an MPC framework that constructs an efficient and scalable FHE-based MPC protocol by integrating a linear secret sharing scheme (LSSS)-based MPC and FHE. The resulting FHE-based MPC protocol achieves active security in the dishonest majority setting and constant complexity in online communication, computation per gate, rounds, and private input size. Notably, when the framework is instantiated with the SPDZ protocol and gate FHE, the resulting FHE-based MPC protocol efficiently achieves active security in the dishonest majority setting by using SPDZ-style MACs and keeps the per-gate computation time within 3 ms. Moreover, its offline phase achieves scalable communication and computation, both of which grow linearly with the number of parties $n$. In other words, the proposed FHE-based MPC preserves the key advantages of existing FHE-based MPCs and simultaneously overcomes their weaknesses. As a result, the proposed FHE-based MPC is highly practical and secure, like SPDZ-style and BMR-style protocols.
For the first time, we introduce the concept of circuit-privacy, which ensures that external adversaries who eavesdrop on communications do not obtain information about the circuit. We rigorously prove that our construction inherently satisfies circuit-privacy, thereby establishing a novel security option for MPC.
Thomas de Ruijter, Jan-Pieter D'Anvers, Ingrid Verbauwhede
Fully Homomorphic Encryption (FHE) allows computations on encrypted data without revealing any information about the data itself. However, FHE ciphertexts include noise for security reasons, which increases during operations and can lead to decryption errors. This paper addresses the noise introduced during bootstrapping in Torus Fully Homomorphic Encryption (TFHE), particularly focusing on approximation errors during modulus switching and gadget decomposition. We propose a mean compensation technique that removes the mean term from the noise equations, achieving up to a twofold reduction in noise variance. This method can be combined with bootstrap key unrolling for further noise reduction. Mean compensation can reduce the error probability of a standard parameter set from $2^{-64.30}$ to $2^{-100.47}$, or allows the selection of more efficient parameters leading to a speedup of bootstrapping up to a factor $2.14\times$.
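For orientation, the modulus-switching step whose approximation error is targeted here simply rounds a ciphertext component from $\mathbb{Z}_q$ to $\mathbb{Z}_{2N}$; the sketch below shows that rounding and the resulting per-coefficient error. The parameter choices are illustrative assumptions, and the paper's mean-compensation technique is not implemented here.

```python
Q = 2**32          # LWE ciphertext modulus (illustrative choice, not the paper's)
N = 1024           # ring dimension; blind rotation uses exponents mod 2N

def modulus_switch(x: int, q: int = Q, two_n: int = 2 * N) -> int:
    """Round x in Z_q to the nearest multiple of q/(2N), expressed in Z_{2N}.

    The difference between x and the value the result represents is the
    approximation error that modulus switching adds before the blind rotation."""
    return round(x * two_n / q) % two_n

x = 123_456_789
x_sw = modulus_switch(x)
err = x - x_sw * (Q // (2 * N))    # bounded by q/(4N) in absolute value
print(x_sw, err)
```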
Viktória I. Villányi, Vladimir Božović
Attribute-based encryption can be considered a generalization of public key encryption, enabling fine-grained access control over encrypted data using predetermined access policies. In general, we distinguish between key-policy and ciphertext-policy attribute-based encryption schemes. Our new scheme is built upon the multi-authority attribute-based encryption scheme with an honest-but-curious central authority in a key-policy setting presented earlier by Božović et al., and it can be considered an extension of their scheme. In their paper, trust was shared between the central authority and the participating authorities, who were responsible for issuing attribute-specific secret keys. The central authority was not capable of decrypting any message as long as there existed an honest attribute authority. In our new scheme, we maintain this feature and add another level of security by allowing users to participate in the key generation process and contribute to the final user-specific attribute secret keys. Users gain more control over their own secret keys, and they will be the only parties with access to the final user-specific secret keys. Furthermore, no secure channels, only authenticated communication channels, are needed between users and authorities. After these modifications, our scheme is closer to registered multi-authority attribute-based encryption. We refer to our scheme as a partially registered type of multi-authority attribute-based encryption scheme. We prove the security of our scheme in the Selective-ID model.