IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
12 April 2025
Shymaa M. Arafat
When Nakamoto invented Bitcoin, the first generation of cryptocurrencies followed it in applying the POW (Proof of Work) consensus mechanism; due to its excessive energy consumption and heavy carbon footprint, new innovations evolved, such as Proof of Space, POS (Proof of Stake), and many more, with many variants of each. Furthermore, the emergence of more blockchain applications and kinds beyond just cryptocurrencies called for consensus mechanisms optimized to fit the requirements of each application or blockchain kind; examples range from IoT (Internet of Things) blockchains for sustainability applications, which often use variants of the BFT (Byzantine Fault Tolerance) algorithm, to the consensus needed to relay transactions and/or assets between different blockchains in interoperability solutions. Previous studies concentrated on surveying and/or proposing different blockchain consensus rules, on a specific consensus issue such as attacks or randomization, or on deriving theoretical results. Starting from a discussion of the most important theoretical results, this paper tries to gather and organize all significant existing material about consensus in the blockchain world, explaining design challenges, tradeoffs, and research areas. We realize that the topic could fill a complete textbook, so we summarize the basic concepts and support them with tables and appendices. We then highlight some case examples from interoperability solutions to show how flexible and wide the design space is, fitting both general- and special-purpose systems. The aim is to provide researchers with a comprehensive overview of the topic, along with links to go deeper into every detail.
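The POW mechanism discussed in this abstract boils down to a hash puzzle: find a nonce making a hash fall below a target, where verification is a single hash. A minimal plaintext sketch (the encoding, difficulty convention, and search budget here are arbitrary illustrative choices, not any specific cryptocurrency's rules):

```python
import hashlib

def mine(data: bytes, difficulty_bits: int, max_nonce: int = 1_000_000):
    """Search for a nonce such that SHA-256(data || nonce) starts with
    `difficulty_bits` zero bits -- the hash-puzzle idea behind POW."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None  # no solution found within the search budget

def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification is a single hash -- cheap compared to mining."""
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry between the expensive search loop and the one-shot verification is exactly what makes the puzzle energy-hungry, which motivates the alternatives (POS, BFT variants) surveyed in the paper.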
11 April 2025
Nobuyuki Sugio
SAND is an AND-RX-based lightweight block cipher proposed by Chen et al. There are two variants, SAND-64 and SAND-128, which differ structurally. In this paper, we search for impossible differential distinguishers of SAND-64 using Constraint Programming (CP) and reveal 56 types of impossible differential distinguishers of up to 11 rounds. Furthermore, we demonstrate a key recovery attack on 17-round SAND-64. The attack requires $2^{56}$ data, $2^{127}$ encryptions, and $2^{60}$ bytes of memory. Although this result is currently the best attack on round-reduced SAND-64, it does not threaten the security of SAND-64 against impossible differential attacks.
Daniel Escudero, Elisaweta Masserova, Antigoni Polychroniadou
The YOSO (You Only Speak Once) model, introduced by Gentry et al. (CRYPTO 2021), helps to achieve strong security guarantees in cryptographic protocols for distributed settings, like blockchains, with a large number of parties. YOSO protocols typically employ smaller anonymous committees to execute individual rounds of the protocol instead of having all parties execute the entire protocol. After completing their tasks, parties encrypt protocol messages for the next anonymous committee and erase their internal state before publishing ciphertexts, thereby enhancing security in dynamically changing environments.
In this work, we consider the problem of secure multi-party computation (MPC), a fundamental problem in cryptography and distributed computing. We assume an honest majority among the committee members, and work in the online-offline, i.e., preprocessing, setting. In this context, we present the first YOSO MPC protocol where efficiency---measured as communication complexity---improves as the number of parties increases. Specifically, for $0<\epsilon<1/2$ and an adversary corrupting $t0$, the sizes of the committees are only marginally increased, while online communication is significantly reduced.
Furthermore, we explicitly consider fail-stop adversaries, i.e., honest participants who may inadvertently fail due to reasons such as denial of service or software/hardware errors. In prior YOSO work, these adversaries were grouped with fully malicious parties. Adding explicit support for them allows us to achieve even better scalability.
Sneha Mohanty, Christian Schindelhauer
We introduce a novel symmetric-key cryptographic scheme involving a light ray's interaction with a 2D Cartesian coordinate setup containing several smaller boxes, each of either reflection or refraction type and each holding a $1^{st}$, $2^{nd}$, or $3^{rd}$ degree polynomial curve. We also incorporate Boolean logic gates of types XOR, NOT-Shift, and Permutation, which are applied to the light ray after each interaction with a reflecting or refracting polynomial curve. This alternating interaction between optical gates (polynomial curves) and non-optical gates creates a complex and secure cryptographic system. Furthermore, we design and launch customized attacks on our cryptographic system and discuss its robustness against them.
Andreas Hülsing, Mikhail Kudinov, Christian Majenz
In this paper, we present an improved framework for proving query bounds in the Quantum Random Oracle Model (QROM) for algorithms with both quantum and classical query interfaces, where the classical input is partially controlled by the adversary. By extending existing techniques, we develop a method to bound the progress an adversary can make with such partially controlled classical queries. While this framework is applicable to different hash function properties, we demonstrate the impact of the new techniques by giving an analysis of the multi-target extended target collision resistance property (m-eTCR). This new approach allows us to achieve an improved bound that significantly reduces the required function key size. Our proof is tight in terms of query complexity and has significant implications for cryptographic applications, especially for signature schemes in the hash & sign paradigm, enabling more efficient instantiations with reduced salt sizes and smaller signature lengths. In an example of multi-signature aggregation, we achieve a signature size that is 30 kB smaller.
Lorenz Panny
In the McEliece public-key encryption scheme, a private key is almost always not determined uniquely by its associated public key. This paper gives a structural characterization of equivalent private keys, generalizing a result known for the more approachable special case $\lvert L\rvert=q$.
These equivalences reduce the cost estimate for a simple private-key search using the support-splitting algorithm (SSA) by a polynomial but practically very substantial factor.
We provide an optimized software implementation of the SSA for this kind of key search and demonstrate its capabilities in practice by solving a key-recovery challenge with a naïve a-priori cost estimate of $2^{83}$ bit operations in just ${\approx}\,1400$ core days, testing ${\approx}\,9400$ private-key candidates per core per second in the process.
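As a back-of-the-envelope check of these throughput figures (our own arithmetic, not the paper's), ~1400 core-days at ~9400 candidates per core per second comes to about $2^{40}$ candidates actually tested, far below the naïve $2^{83}$ estimate:

```python
import math

# Reported figures: ~1400 core-days at ~9400 private-key candidates per core per second.
core_days = 1400
rate = 9400                    # candidates per core per second
seconds = core_days * 86_400   # convert core-days to core-seconds
tested = seconds * rate        # total candidates tested

print(f"~{tested:.2e} candidates, i.e. about 2^{math.log2(tested):.1f}")
```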
We stress that the speedup from those equivalences is merely polynomial and does not indicate any weakness in realistic instantiations of the McEliece cryptosystem, whose parameter choices are primarily constrained by decoding attacks rather than ludicrously more expensive key-recovery attacks.
Aniket Kate, Pratyay Mukherjee, Samipa Samanta, Pratik Sarkar
The works of Garg et al. [S&P'24] (aka hinTS) and Das et al. [CCS'23] introduced the notion of silent threshold signatures (STS), where a set of signers silently perform local computation to generate a public verification key. To sign a message, any set of $t$ signers sign the message non-interactively, and these signatures are aggregated into a constant-sized signature. This paradigm avoids performing an expensive Distributed Key Generation procedure for each set of signers while keeping the public verification key constant-sized.
In this work, we propose the notion of committee-based silent threshold signature (c-STS) scheme. In a c-STS scheme, a set of signers initially perform a one-time setup to generate the verification key, and then a subset of signers are randomly chosen for an epoch to perform the threshold signing while the other signers are not authorized to sign during that epoch. This captures existing systems like Ethereum Altair and Dfinity where only a specific committee is authorized to sign in a designated epoch. The existing STS schemes cannot be extended to the committee setting because the signature verification only attests to the number of signing parties, not which committee they belong to.
So, we upgrade hinTS to the committee setting by proposing Dyna-hinTS. It is the first c-STS scheme; it requires a one-time silent setup and generates a one-time public verification key that does not vary with the committee. Assuming a set of 1024 signers (of whom 682 are corrupt), hinTS generates an aggregated signature in 1.7s, whereas Dyna-hinTS generates it in $0.35$s within a committee of $80$ signers. This yields a $4.9\times$ improvement over hinTS for signature generation at the cost of increasing signature verification time by $4\%$. Dyna-hinTS supports general access structures and weighted signatures, and improves upon existing multiverse threshold signatures.
Cong Zhang, Liqiang Peng, Weiran Liu, Shuaishuai Li, Meng Hao, Lei Zhang, Dongdai Lin
The online realm has witnessed a surge in the buying and selling of data, prompting the emergence of dedicated data marketplaces. These platforms cater to servers (sellers), enabling them to set prices for access to their data, and clients (buyers), who can subsequently purchase these data, thereby streamlining and facilitating such transactions. However, current data markets are primarily confronted with the following issues. First, they fail to protect client privacy, presupposing that clients submit their queries in plaintext. Second, these models are susceptible to malicious client behavior, which may, for example, enable clients to engage in arbitrage activities.
To address the aforementioned issues, we propose payable secure computation, a novel secure computation paradigm specifically designed for data pricing scenarios. It grants the server the ability to securely procure essential pricing information while protecting the privacy of client queries. Additionally, it fortifies the server's privacy against potential malicious client activities. As specific applications, we have devised customized payable protocols for two distinct secure computation scenarios: Keyword Private Information Retrieval (KPIR) and Private Set Intersection (PSI).
We implement our two payable protocols and compare them with state-of-the-art related protocols that do not support pricing as baselines. Since our payable protocols are more powerful in the data pricing setting, the experiment results show that they do not introduce much overhead over the baseline protocols. Our payable KPIR achieves the same online cost as the baseline, while its setup is about $1.3-1.6\times$ slower. Our payable PSI needs about $2\times$ the communication cost of the baseline protocol, while its runtime is $1.5-3.2\times$ slower depending on the network setting.
Pedram Hosseyni, Ralf Kuesters, Tim Würtele
We introduce audience injection attacks, a novel class of vulnerabilities that impact widely used Web-based authentication and authorization protocols, including OAuth 2.0, OpenID Connect, FAPI, CIBA, the Device Authorization Grant, and various well-established extensions, such as Pushed Authorization Requests, Token Revocation, Token Introspection, and their numerous combinations.
These protocols underpin services for billions of users across diverse ecosystems worldwide, spanning low-risk applications like social logins to high-risk domains such as open banking, insurance, and healthcare.
Audience injection attacks exploit a critical weakness in a core security mechanism of these protocols - the handling of so-called audiences in signature-based client authentication mechanisms. This vulnerability allows attackers to compromise fundamental security objectives whenever these mechanisms are utilized across two or more server endpoints. They enable the attacker to impersonate users and gain unauthorized access to their resources, even in high-security protocol families specifically designed for sensitive applications.
We responsibly disclosed these vulnerabilities to the relevant standardization bodies, which recognized their severity. In collaboration with these organizations, we developed fixes and supported a coordinated response, leading to an ongoing effort to update a dozen standards, numerous major implementations, and far-reaching ecosystems.
Pierre-Augustin Berthet, Justine Paillet, Cédric Tavernier, Lilian Bossuet, Brice Colombier
FALCON is a post-quantum signature scheme selected by the National Institute of Standards and Technology (NIST). Although its side-channel resilience has been studied and a masking countermeasure proposed, the masked division is a major performance bottleneck. This work proposes a different approach to the masked FALCON division. We use the Newton method and a convergent sequence to approximate this operation. The performance of the masked division is improved by a factor of 6.7 for two shares and 6.98 for three shares. For the Gaussian sampler, the improvements are factors of 1.45 for two shares and 1.43 for three shares. Formal security proofs using the MIMO-SNI criteria are also provided.
Yimeng He, San Ling, Khai Hanh Tang, Huaxiong Wang
Group signatures allow a user to sign anonymously on behalf of a group of users while allowing a tracing authority to trace the signer's identity in case of misuse. In Chaum and van Heyst's original model (EUROCRYPT'91), the group needs to stay fixed. After various intermediate notions, including partially dynamic group signatures and revocation, Bootle et al. (ACNS'16, J. Cryptol.) formalized the notion of fully dynamic group signatures (FDGS), enabling both enrolling and revoking users of the group. However, in their scheme, the verification process needs to take into account the latest system information, and a previously generated signature is invalidated as soon as, for example, there is a change in the group. We therefore raise a research question: Is it possible to construct an FDGS under which the validity of a signature can survive future changes in the system information?
In this paper, we propose Everlasting Fully Dynamic Group Signatures (EFDGS) that allow signers to generate signatures that do not require verification with any specific epoch. Specifically, once the signatures are created, they are valid forever. It also guarantees that the signer can only output such a signature when she is a valid user of the system. We realize the above new model by constructing a plausibly post-quantum standard-lattice-based EFDGS.
Hyunjun Kim, Sejin Lim, Kyungbae Jang, Siyi Wang, Anubhab Baksi, Anupam Chattopadhyay, Hwajeong Seo
Quantum computing is regarded as one of the most significant upcoming advancements in computer science.
Although fully operational quantum computers have yet to be realized, they are expected to solve specific problems that are difficult to solve using classical computers.
Given the limitations of quantum computing resources, it is crucial to design compact quantum circuits for core operations, such as quantum arithmetic.
In this paper, we focus on optimizing the circuit depth of quantum multi-operand addition, which is a fundamental component in quantum implementations (for example, of SHA-2). Building on the foundational quantum carry-save approach of Phil Gossett, we introduce a tree-based quantum carry-save adder. Our design integrates the Wallace and Dadda trees to optimize carry handling during multi-operand additions. To further reduce circuit depth, we utilize additional ancilla qubits for parallel operations and introduce an efficient technique for reusing these ancilla qubits.
Our tree-based carry-save adder achieves the lowest circuit depth ($T$-depth) and provides an improvement of over 82% (up to 99%) in the qubit count–circuit depth product for multi-operand addition. Furthermore, we apply our method to multiplication, achieving the lowest circuit depth and an improvement of up to 87% in the qubit count–circuit depth product.
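The classical carry-save idea underlying this adder can be sketched in a few lines: a 3:2 compressor turns three operands into a sum word and a carry word without propagating carries, so multi-operand addition needs only one full carry-propagating add at the end. This is the classical analogue only, not the quantum circuit from the paper:

```python
def carry_save_step(a: int, b: int, c: int, width: int):
    """One classical 3:2 compression: reduce three operands to (sum, carry)
    with no carry propagation. Invariant: s + carry == a + b + c (mod 2**width)."""
    mask = (1 << width) - 1
    s = (a ^ b ^ c) & mask                              # bitwise sum, no carries
    carry = (((a & b) | (a & c) | (b & c)) << 1) & mask  # carries, shifted left
    return s, carry

def multi_operand_add(operands, width):
    """Reduce a list of operands via repeated 3:2 compression (a linear stand-in
    for the Wallace/Dadda tree), then perform a single final addition."""
    ops = list(operands)
    while len(ops) > 2:
        a, b, c = ops.pop(), ops.pop(), ops.pop()
        ops.extend(carry_save_step(a, b, c, width))
    mask = (1 << width) - 1
    return (ops[0] + (ops[1] if len(ops) > 1 else 0)) & mask
```

In a Wallace or Dadda tree the 3:2 compressions are arranged in parallel layers, which is what reduces depth; the quantum version additionally has to manage ancilla qubits for those layers.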
Song Bian, Yunhao Fu, Dong Zhao, Haowen Pan, Yuexiang Jin, Jiayue Sun, Hui Qiao, Zhenyu Guan
We propose an encrypted controller framework for linear time-invariant systems with actuator non-linearity based on fully homomorphic encryption (FHE). While some existing works explore the use of partially homomorphic encryption (PHE) in implementing linear control systems, the impact of the non-linear behaviors of the actuators on these systems is often left unaddressed. In particular, when the inputs to the controller become too small or too large, actuators may burn out due to unstable system state oscillations. To solve this dilemma, we design and implement FHECAP, an FHE-based controller framework that can homomorphically apply non-linear functions to the actuators to rectify the system inputs. In FHECAP, we first design a novel data encoding scheme tailored for efficient gain matrix evaluation. Then, we propose a high-precision homomorphic algorithm to apply a non-arithmetic piecewise function to realize the actuator normalization. In our experiments, compared with existing state-of-the-art encrypted controllers, FHECAP achieves a $4\times$--$1000\times$ reduction in computational latency. We evaluate the effectiveness of FHECAP in the real-world application of encrypted control for spacecraft rendezvous. The simulation results show that FHECAP achieves real-time spacecraft rendezvous with negligible accuracy loss.
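A plaintext toy version of such an actuator-rectifying piecewise map might look like the following; the dead-zone/saturation shape and the threshold names are illustrative assumptions, not the function used in FHECAP, where the same kind of non-arithmetic branching must be evaluated homomorphically:

```python
def saturate(u: float, dead_zone: float, u_max: float) -> float:
    """Toy piecewise actuator normalization: zero out inputs that are too small
    (which would make the actuator chatter) and clamp inputs that are too large
    (which could burn it out). Thresholds are illustrative only."""
    if abs(u) < dead_zone:
        return 0.0
    if u > u_max:
        return u_max
    if u < -u_max:
        return -u_max
    return u
```

The difficulty the paper addresses is that comparisons like these are non-arithmetic, so they cannot be applied directly to FHE ciphertexts and must be approximated with high precision.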
Anand Kumar Narayanan
Weyman and Zelevinsky generalised Vandermonde matrices to higher dimensions, which we call Vandermonde-Weyman-Zelevinsky tensors.
We generalise Lagrange interpolation to higher dimensions by devising a nearly linear time algorithm that given a Vandermonde-Weyman-Zelevinsky tensor and a sparse target vector, finds a tuple of vectors that hit the target under tensor evaluation. Tensor evaluation to us means evaluating the usual multilinear form associated with the tensor in all but one chosen dimension. Yet, this interpolation problem phrased with respect to a random tensor appears to be a hard multilinear system. Leveraging this dichotomy, we propose preimage sampleable trapdoor one-way functions in the spirit of Gentry-Peikert-Vaikuntanathan (GPV) lattice trapdoors. We design and analyse ``Hash-and-Sign'' digital signatures from such trapdoor one-way functions, yielding short signatures whose lengths scale nearly linearly in the security parameter. We also describe an encryption scheme.
Our trapdoor is a random Vandermonde-Weyman-Zelevinsky tensor over a finite field and a random basis change. We hide the Vandermonde-Weyman-Zelevinsky tensor under the basis change and publish the resulting pseudorandom tensor. The one-way function is the tensor evaluation derived from the public tensor, restricted so as to only map to sparse vectors. We then design the domain sampler and preimage sampler demanded by the GPV framework. The former samples inputs that map to uniform images under the one-way function. The latter samples preimages given supplementary knowledge of the trapdoor. Preimage sampling is a randomised version of interpolation, and knowing the basis change allows efficient translation between interpolation corresponding to the public and trapdoor tensors. An adversary seeking a preimage must solve a pseudorandom multilinear system, which seems cryptographically hard.
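As background for the interpolation view, the classical one-dimensional Lagrange interpolation that this work generalizes can be sketched over a prime field as follows; this is the textbook baseline, not the authors' higher-dimensional, nearly-linear-time algorithm:

```python
def lagrange_interpolate(points, p):
    """Classical Lagrange interpolation over GF(p): return the coefficients
    (lowest degree first) of the unique polynomial through the (x, y) points."""
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        # Build the i-th basis polynomial L_i(x) = prod_{j != i} (x - xj) / (xi - xj)
        basis = [1]
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            # Multiply the basis polynomial by (x - xj).
            new = [0] * (len(basis) + 1)
            for k, c in enumerate(basis):
                new[k] = (new[k] - c * xj) % p
                new[k + 1] = (new[k + 1] + c) % p
            basis = new
            denom = denom * (xi - xj) % p
        # Scale L_i so that it contributes yi at xi, and accumulate.
        scale = yi * pow(denom, -1, p) % p
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + c * scale) % p
    return coeffs
```

The paper's point is the dichotomy: for structured (Vandermonde-Weyman-Zelevinsky) tensors the analogous higher-dimensional interpolation stays easy, while for pseudorandom tensors it appears to be a hard multilinear system.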
Tomer Keniagin, Eitan Yaakobi, Ori Rottenstreich
Set reconciliation is a fundamental task in distributed systems, particularly in blockchain networks, where it enables the synchronization of transaction pools among peers and facilitates block dissemination. Existing traditional set reconciliation schemes are either statistical, providing a success probability as a function of the communication overhead and the size of the symmetric difference, or require parametrization and estimation of the size of the symmetric difference, which can be prone to error. In this paper, we present CertainSync, a novel reconciliation framework that, to the best of our knowledge, is the first to guarantee successful set reconciliation without any parametrization or estimators in use. The framework is rateless and adapts to the unknown symmetric difference size. The set reconciliation is guaranteed to complete successfully whenever the communication overhead reaches a lower bound derived from the symmetric difference size and the universe size. Our framework is based on recent constructions of Invertible Bloom Lookup Tables (IBLTs) ensuring successful element listing as long as the number of elements is bounded. We provide a theoretical analysis to prove the certainty of the set reconciliation for multiple constructions. The approach is also validated by simulations, showing the ability to synchronize sets with efficient communication costs while maintaining reconciliation guarantees, compared to other baseline schemes for set reconciliation. To further improve communication overhead for large universes such as blockchain networks, CertainSync is extended with a universe reduction technique to minimize communication overhead. We compare and validate the extended framework, UniverseReduceSync, against the basic CertainSync framework through simulations using real blockchain transaction hash data from the Ethereum blockchain network.
The results illustrate a trade-off between improved communication costs and maintaining reconciliation guarantees without relying on parametrization or estimators, offering a comprehensive solution for set reconciliation in diverse scenarios.
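The IBLT-based listing that CertainSync builds on can be illustrated with a toy XOR/count sketch: each side builds a table, one subtracts the other's, and "peeling" pure cells lists the symmetric difference. The hash functions, sizing, and cell layout below are illustrative assumptions, not the constructions analyzed in the paper:

```python
class ToyIBLT:
    """Toy Invertible Bloom Lookup Table over positive integers, using
    XOR/count cells and peeling decode."""

    def __init__(self, m: int):
        self.m = m
        self.count = [0] * m
        self.key_xor = [0] * m

    def _cells(self, x: int):
        # Three simple affine probe positions (a toy stand-in for real hashes).
        return [x % self.m, (2 * x + 3) % self.m, (3 * x + 5) % self.m]

    def toggle(self, x: int, sign: int):
        for i in self._cells(x):
            self.count[i] += sign
            self.key_xor[i] ^= x

    def subtract(self, other: "ToyIBLT"):
        # Cell-wise difference: elements common to both sets cancel exactly.
        for i in range(self.m):
            self.count[i] -= other.count[i]
            self.key_xor[i] ^= other.key_xor[i]

    def peel(self):
        """List the symmetric difference by repeatedly draining 'pure' cells
        (cells holding exactly one surviving element)."""
        only_mine, only_theirs = set(), set()
        changed = True
        while changed:
            changed = False
            for i in range(self.m):
                x = self.key_xor[i]
                if self.count[i] in (1, -1) and x != 0 and i in self._cells(x):
                    (only_mine if self.count[i] == 1 else only_theirs).add(x)
                    self.toggle(x, -self.count[i])
                    changed = True
        return only_mine, only_theirs
```

Plain IBLTs only decode with high probability for a bounded difference size; the paper's contribution is constructions and sizing rules under which listing, and hence reconciliation, is guaranteed.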
Yackolley Amoussou-Guenou, Lionel Beltrando, Maurice Herlihy, Maria Potop-Butucaru
Byzantine Reliable Broadcast is one of the most popular communication primitives in distributed systems. Byzantine reliable broadcast ensures that processes agree to deliver a message from an initiator, even if some processes (possibly including the initiator) are Byzantine. In asynchronous settings, it is known since the prominent work of Bracha \cite{Bracha87} that Byzantine reliable broadcast can be implemented deterministically if the total number of processes, denoted by $n$, satisfies $n \geq 3t+1$ where $t$ is an upper bound on the number of Byzantine processes. Here, we study Byzantine Reliable Broadcast when processes are equipped with \emph{trusted components}, special software or hardware designed to prevent equivocation. Our contribution is threefold. First, we show that, despite common belief, when each process is equipped with a trusted component, Bracha's algorithm still needs $n \geq 3t+1$. Second, we present a novel algorithm that uses a single trusted component (at the initiator) that implements Byzantine Reliable Asynchronous Broadcast with $n \geq 2t+1$.
Lastly, building on our broadcast algorithm, we present TenderTee, a transformation of the Tendermint consensus algorithm using a trusted component, giving better Byzantine resilience. TenderTee works with $n \geq 2t+1$, where Tendermint needed $n \geq 3t+1$.
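The resilience bounds above can be checked with trivial arithmetic; a one-line sketch of the two thresholds (not of the algorithms themselves):

```python
def max_byzantine(n: int, trusted_component: bool) -> int:
    """Largest tolerable number of Byzantine processes t among n:
    n >= 3t+1 classically (Bracha), versus n >= 2t+1 with the paper's
    trusted-component construction."""
    return (n - 1) // 2 if trusted_component else (n - 1) // 3
```

For example, a system of 100 processes tolerates 33 Byzantine faults classically but 49 with the $n \geq 2t+1$ bound.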
Sanjay Deshpande, Yongseok Lee, Cansu Karakuzu, Jakub Szefer, Yunheung Paek
This work presents SPHINCSLET, the first fully standard-compliant and area-efficient hardware implementation of the SLH-DSA algorithm, formerly known as SPHINCS+, a post-quantum digital signature scheme. SPHINCSLET is designed to be parameterizable across different security levels and hash functions, offering a balanced trade-off between area efficiency and performance. Existing hardware implementations either feature a large area footprint to achieve fast signing and verification or adopt a coprocessor-based approach that significantly slows down these operations. SPHINCSLET addresses this gap by delivering a 4.7$\times$ reduction in area compared to high-speed designs while achieving a 2.5$\times$ to 5$\times$ improvement in signing time over the most efficient coprocessor-based designs for a SHAKE256-based SPHINCS+ implementation. The SHAKE256-based SPHINCS+ FPGA implementation targeting the AMD Artix-7 requires fewer than 10.8K LUTs for any security level of SLH-DSA. Furthermore, the SHA-2-based SPHINCS+ implementation achieves a 2$\times$ to 4$\times$ speedup in signature generation across various security levels compared to existing SLH-DSA hardware, all while maintaining a compact area footprint of 6K to 15K LUTs. This makes it the fastest SHA-2-based SLH-DSA implementation to date. With an optimized balance of area and performance, SPHINCSLET can assist resource-constrained devices in transitioning to post-quantum cryptography.
Alhad Daftardar, Jianqiao Mo, Joey Ah-kiow, Benedikt Bünz, Ramesh Karri, Siddharth Garg, Brandon Reagen
(Preprint) Zero-Knowledge Proofs (ZKPs) are rapidly gaining importance in privacy-preserving and verifiable computing. ZKPs enable a proving party to prove the truth of a statement to a verifying party without revealing anything else. ZKPs have applications in blockchain technologies, verifiable machine learning, and electronic voting, but have yet to see widespread adoption due to the computational complexity of the proving process. Recent works have accelerated the key primitives of state-of-the-art ZKP protocols on GPU and ASIC. However, the protocols accelerated thus far face one of two challenges: they either require a trusted setup for each application, or they generate larger proof sizes with higher verification costs, limiting their applicability in scenarios with numerous verifiers or strict verification time constraints. This work presents an accelerator, zkSpeed, for HyperPlonk, a state-of-the-art ZKP protocol that supports both a one-time, universal setup and small proof sizes for typical ZKP applications in publicly verifiable, consensus-based systems. We accelerate the entire protocol, including two major primitives: SumCheck and Multi-scalar Multiplications (MSMs). We develop a full-chip architecture using 366.46 mm$^2$ and 2 TB/s of bandwidth to accelerate the entire proof generation process, achieving geometric mean speedups of 801$\times$ over CPU baselines.
Nicolas Desmoulins, Antoine Dumanois, Seyni Kane, Jacques Traoré
eIDAS 2.0 (electronic IDentification, Authentication and trust Services) is an ambitious regulation aimed at equipping European citizens with a personal digital identity wallet (EU Digital Identity Wallet) on a mobile phone. Such a wallet must not only achieve a high level of security, but also be available as soon as possible to a large number of citizens and respect their privacy (as per the GDPR, the General Data Protection Regulation).
In this paper, we introduce the foundations of a digital identity wallet solution that could help move closer to this objective by leveraging the proven anonymous credentials system BBS (Eurocrypt 2023), also known as BBS+, but modifying it to avoid the limitations that have hindered its widespread adoption, especially in certified infrastructures requiring trusted hardware implementation. In particular, the solution we propose, which we call BBS#, does not rely, contrary to BBS/BBS+, on bilinear maps and pairing-friendly curves (which are not supported by existing hardware) and only depends on the hardware implementation of well-known digital signature schemes such as ECDSA (ISO/IEC 14888-3) or ECSDSA (also known as ECSchnorr, ISO/IEC 14888-3) using classical elliptic curves. More precisely, BBS# can be rolled out without requiring any change in existing hardware or the algorithms that hardware supports. BBS#, which is proven secure in the random oracle model, retains the well-known security property (unforgeability of the credentials under the (gap) q-SDH assumption) and anonymity properties (multi-show full unlinkability and statistical anonymity of presentation proofs) of BBS/BBS+.
By implementing BBS# on several smartphones using different secure execution environments, we show that it is possible to achieve eIDAS 2.0 transactions which are not only efficient (around 70 ms on Android StrongBox), secure and certifiable at the highest level but also provide strong (optimal) privacy protection for all European ID Wallet users.
Jayamine Alupotha, Mariarosaria Barbaraci, Ioannis Kaklamanis, Abhimanyu Rawat, Christian Cachin, Fan Zhang
Modern life makes having a digital identity no longer optional, whether one needs to manage a bank account or subscribe to a newspaper. As the number of online services increases, it is fundamental to safeguard user privacy and equip service providers (SPs) with mechanisms enforcing Sybil resistance, i.e., preventing a single entity from posing as many.
Current approaches, such as anonymous credentials and self-sovereign identities, typically rely on identity providers or identity registries trusted not to track users' activities. However, this assumption of trust is no longer appropriate in a world where user data is considered a valuable asset.
To address this challenge, we introduce a new cryptographic notion, Anonymous Self-Credentials (ASC), along with two implementations. This approach enables users to maintain their privacy within an anonymity set while allowing SPs to obtain Sybil resistance. Then, we present a User-issued Unlinkable Single Sign-On (U2SSO) scheme, implemented from ASC, that relies solely on an identity registry to immutably store identities. A U2SSO solution allows users to generate unlinkable child credentials for each SP using only one set of master credentials.
We demonstrate the practicality and efficiency of our U2SSO solution by providing a complete proof-of-concept.
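To illustrate the one-master/many-children idea behind U2SSO, here is a minimal sketch in which a single master secret deterministically yields an independent-looking credential per service provider. This is emphatically not the paper's construction (which builds on Anonymous Self-Credentials and an identity registry); it is a toy HMAC-based derivation, and every name and parameter here is illustrative.

```python
# Toy sketch: one master secret, one unlinkable-looking child credential
# per service provider (SP). Hypothetical helper names; not the U2SSO scheme.
import hmac, hashlib, secrets

def new_master_secret() -> bytes:
    """One set of master credentials, created once by the user."""
    return secrets.token_bytes(32)

def child_credential(master: bytes, sp_id: str) -> bytes:
    """Derive a per-SP child credential. Without the master secret, the
    outputs for different SPs look unrelated, so SPs cannot link a user
    across services; determinism gives one identity per SP (Sybil
    resistance from the SP's point of view)."""
    return hmac.new(master, sp_id.encode(), hashlib.sha256).digest()

master = new_master_secret()
cred_bank = child_credential(master, "bank.example")
cred_news = child_credential(master, "news.example")
assert cred_bank != cred_news  # distinct per service provider
assert cred_bank == child_credential(master, "bank.example")  # stable per SP
```

The real scheme must additionally let the user prove, in zero knowledge, that a child credential descends from a registered master credential without revealing which one; that is the part the ASC machinery supplies and which this toy derivation does not capture.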