IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
27 April 2025
Weiqing Deng, Jianing Zhang, Haoyang Wang
Differential meet-in-the-middle (MITM) cryptanalysis, introduced by Boura et al. at CRYPTO 2023, has been demonstrated to be an effective technique for analyzing the security of block ciphers. In this paper, we introduce an improved parallel partitioning technique and incorporate it into a new framework with a flexible key recovery strategy. This framework is applicable to both SPN and Feistel ciphers. We apply the new framework to SIMON and Piccolo-128 for demonstration. In particular, we also develop an MILP-based tool for searching for full attacks on Piccolo-128. For SIMON48/96, we propose the best attack to date, reaching 26 rounds compared to 25 rounds previously. For other versions of SIMON, we extend the best previous differential attacks by one or two rounds. In the case of Piccolo-128, while no current differential attacks show promising results, our differential MITM attack reaches 15 rounds, matching the round count of the best impossible differential attack.
Xin Wang, Xiao Sui, Sisi Duan
Sharding is a generic approach to enhancing the scalability of distributed systems. In recent years, many efforts have been made to scale the consensus mechanism of blockchains through sharding. A crucial research question is how to hit the sweet spot of a relatively small shard size (to achieve decent performance) while maintaining an overwhelming probability of correctness (so the system remains safe and live). Many recent works adopt a two-layer design that uses coordinating shards to monitor the correctness of other shards (CCS 2022, NDSS 2024, INFOCOM 2023). All of them involve expensive communication between the shards, significantly degrading performance.
We present Otter, a scalable partially synchronous sharding-based Byzantine fault-tolerant atomic broadcast (ABC) protocol. We use coordinating shards in a completely new way. In particular, we randomly sample coordinating shards to directly participate in the consensus protocol. Such a random sampling mechanism makes it possible to analyze the correctness of the ABC protocol using a probabilistic model. In this way, we can significantly lower the shard size (informally, from over 1,200 in previous work to around 100) without lowering the probability of correctness. We also present a new notion called abortable fork detection (AFD) that might be of independent interest. Our evaluation results on Amazon EC2 using up to 1,000 replicas show that Otter achieves up to 4.38x the throughput of the state-of-the-art protocol.
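The committee-sampling idea sketched above can be made concrete with a hypergeometric tail computation: how likely is a randomly drawn shard to contain too many Byzantine replicas? The code below is an illustrative back-of-the-envelope check, not the paper's analysis; the population size, Byzantine fraction, and fault threshold are hypothetical parameters chosen only for the example.

```python
from math import comb

def bad_committee_prob(N, B, s, max_faulty):
    """Probability that a uniformly random committee of size s, drawn
    without replacement from N replicas of which B are Byzantine,
    contains more than max_faulty Byzantine members (hypergeometric tail)."""
    total = comb(N, s)
    tail = sum(comb(B, k) * comb(N - B, s - k)
               for k in range(max_faulty + 1, min(B, s) + 1))
    return tail / total

# Hypothetical parameters: 10,000 replicas, 20% Byzantine,
# committee of 100 tolerating fewer than one third faulty members.
p = bad_committee_prob(10_000, 2_000, 100, 33)
```

Shrinking the committee trades failure probability against performance; the protocol's actual shard sizes, thresholds, and failure model are those given in the paper, not these toy numbers.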
Toshihiro Suzuki, Hiroki Furue, Takuma Ito, Shuhei Nakamura, Shigenori Uchiyama
Multivariate public key cryptography (MPKC) is considered a promising candidate for post-quantum cryptography, with its security relying on the hardness of solving systems of multivariate quadratic equations.
Among MPKC schemes, the unbalanced oil and vinegar (UOV) and its variants have been actively studied. Pébereau and Luyten showed that the Kipnis–Shamir attack and the singular point attack can be described within the same framework using the Jacobian matrix.
In this study, we demonstrate that the rectangular MinRank attack can also be described within this framework. Furthermore, by leveraging this framework, we extend the feasible target ranks of the rectangular MinRank attack and use this extended attack to analyze the security of UOV and its variants. In conclusion, we confirm that the currently proposed parameters for UOV, MAYO, QR-UOV, and SNOVA are resistant to this attack.
Alexandru Cojocaru, Minki Hhan, Qipeng Liu, Takashi Yamakawa, Aaram Yun
In this work, we derive the first lifting theorems for establishing security in the quantum random permutation and ideal cipher models. These theorems relate the success probability of an arbitrary quantum adversary to that of a classical algorithm making only a small number of classical queries.
By applying these lifting theorems, we improve previous results and obtain new quantum query complexity bounds and post-quantum security results. Notably, we derive tight bounds for the quantum hardness of the double-sided zero search game and establish the post-quantum security for the preimage resistance, one-wayness, and multi-collision resistance of constant-round sponge, as well as the collision resistance of the Davies-Meyer construction.
Anubhav Baweja, Pratyush Mishra, Tushar Mopuri, Matan Shtepel
Recent work on IOP-based succinct arguments has focused on developing IOPs that improve prover efficiency by relying on linear-time encodable codes. We present two new schemes for improving the efficiency of such succinct arguments:
$\quad \bullet$ $\mathsf{FICS}$, an IOP of proximity for multilinear polynomial evaluation that, like the prior work Blaze [EUROCRYPT 2025], achieves linear prover time, but additionally reduces the verifier oracle query complexity to $O(\lambda \log \log n + \log n)$ for codewords of length $n$.
$\quad \bullet$ $\mathsf{FACS}$, an accumulation scheme for NP that achieves linear prover time and $O(\lambda)$ oracle queries per step of the accumulation.
Both schemes support a large class of linear-time encodable codes, including systematic LDPC codes and tensor codes of linear-time encodable codes.
We obtain our results by extending and formalizing the framework of Interactive Oracle Reductions (IORs) introduced by Ben-Sasson et al. [TCC 2019]. In particular, we develop new IORs for "codeswitching" tensor codes (Ron-Zewi and Rothblum [JACM 2024]), and also develop a new notion of knowledge soundness for IORs that allows us to easily compose IORs and to prove the security of our schemes in the non-interactive setting, even if the underlying codes are not known to be decodable in polynomial time.
Max Duparc
We study theta structures on products of elliptic curves, detailing their construction through the symmetries induced by $4$-torsion points. In particular, we show how these symmetries allow the computation of theta structures projectively, thus avoiding the use of modular inversions.
Furthermore, we explore the self-similarity of the matrix representation of theta structures, arising from the action of the canonical $2$-torsion point in the Kummer line. Combined with the sparsity of certain $4$-torsion points, this structure leads to new formulae for computing gluing $(2,2)$ isogenies that require significantly fewer precomputations and arithmetic operations.
These new equations also naturally support the evaluation of points on the quadratic twist at negligible additional cost, without requiring operations in a field extension.
Katharina Boudgoust, Anamaria Costache
Threshold encryption schemes provide a common tool to secure a public-key encryption scheme against single-point-of-failure attacks. Despite the success of lattices in building fully-homomorphic and presumably quantum-resistant encryption schemes, the task of thresholdizing those schemes remains challenging. The major bottleneck in the standard approach is the use of statistical noise flooding, which leads to a significant efficiency loss and the need for stronger hardness assumptions. Recent works have replaced the heavy statistical noise flooding with a lighter one based on the Rényi divergence. The new Rényi noise flooding both improves efficiency and allows the use of weaker hardness assumptions. However, arguing semantic security of lattice-based threshold schemes in the presence of Rényi noise flooding has proven challenging. Chowdhury et al. (IACR ePrint'22) argued in the fully-homomorphic case that the Rényi divergence applies directly to semantic security by making use of an existing framework called public sampleability.
In this work, we argue that their public sampleability framework was neither sufficient nor correctly used. To address both issues, we strengthen the framework and thoroughly apply it to prove semantic security of generic lattice-based threshold encryption constructions. We distinguish between the plain public-key and the fully-homomorphic settings, as different security notions are achieved. As a byproduct, this shows that the proof detour via one-way security made by Boudgoust and Scholl (Asiacrypt'23) was superfluous, now leading to tighter proofs in the standard model.
Vicent Esteve Voltes
Delegation of quantum computation in a trustful way is one of the most fundamental challenges toward the realization of future quantum cloud computing. While considerable progress has been made, no known protocol provides a purely classical client with universal delegated quantum computation while simultaneously ensuring blindness (input privacy), verifiability (soundness), and robustness against quantum noise—a feat that must be achieved under stringent cryptographic assumptions and with low overhead.
In this work, I introduce UVCQC, a new delegation framework that, for the first time, realizes a fully composable protocol for securely delegating quantum computations to an untrusted quantum server from a classical client. My scheme employs trap-based quantum authentication, post-quantum cryptographic commitments, and zero-knowledge proofs to provide full guarantees: the client remains purely classical; the server learns nothing about the computation; and any attempt to deviate from the specified circuit is detected with high probability.
I rigorously prove completeness, soundness, and perfect blindness of the protocol and demonstrate its universal composability against unbounded quantum adversaries. Furthermore, I propose a thermodynamically inspired verification mechanism based on energy dissipation and entropy change, enabling physically testable verification independent of cryptographic assumptions.
Beyond its core architecture, UVCQC is deeply intertwined with multidisciplinary frameworks: it admits a game-theoretic formulation where honesty is a Nash equilibrium, an information-theoretic treatment grounded in Holevo bounds, a categorical model via compact closed structures, and novel cryptographic enhancements based on isogeny-based primitives and topological invariants.
This research offers a scalable and unified solution to the blind and verifiable delegation problem, pushing forward the theoretical and practical frontiers of secure quantum computation—and opening a tangible path toward trustable quantum cloud services for classical users.
24 April 2025
Hemin Rahimi, Amir Moradi
Safeguarding cryptographic implementations against the increasing threat of Side-Channel Analysis (SCA) attacks is essential. Masking, a countermeasure that randomizes intermediate values, is a cornerstone of such defenses. In particular, an SCA-secure implementation of AES, the most widely used encryption standard, can employ Boolean masking as well as multiplicative masking due to its underlying Galois field operations. However, multiplicative masking is susceptible to vulnerabilities, including the zero-value problem, which was identified shortly after the introduction of multiplicative masking. At CHES 2018, De Meyer et al. proposed a hardware-based approach to manage these challenges and implemented multiplicative masking for AES, incorporating a Kronecker delta function and randomness optimization.
In this work, we evaluate their design using the PROLEAD evaluation tool under the glitch- and transition-extended probing model. Our findings reveal a critical vulnerability in their first- and second-order implementation of the Kronecker delta function, stemming from the employed randomness optimization. This leakage compromises the security of their presented masked AES Sbox. After pinpointing the source of such a leakage, we propose an alternative randomness optimization for the first-order design to address this issue, and demonstrate its effectiveness through rigorous evaluations by means of PROLEAD.
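The zero-value problem mentioned above is easy to see in code: a multiplicative mask cannot hide the value zero, because $0 \cdot r = 0$ for every mask $r$. The sketch below uses multiplication in the AES field GF($2^8$) purely to illustrate this leakage; it is not the De Meyer et al. design or any hardened implementation.

```python
import secrets

def gf_mul(a, b):
    """Multiplication in GF(2^8) with the AES reduction
    polynomial x^8 + x^4 + x^3 + x + 1 (Russian-peasant method)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return r

def mult_mask(x):
    """Multiplicative masking: share x as x * r with a random nonzero mask r."""
    r = secrets.randbelow(255) + 1
    return gf_mul(x, r), r

# The zero-value problem: x = 0 is always masked to 0, so the masked
# representation leaks whether the sensitive value is zero.
masked_zero = {mult_mask(0)[0] for _ in range(100)}
masked_one = {mult_mask(1)[0] for _ in range(100)}
```

Here `masked_zero` collapses to `{0}` regardless of the mask, while nonzero inputs are spread over many masked values; this is exactly why multiplicative masking needs extra machinery (such as the Kronecker delta function above) to handle zero inputs.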
Alex B. Grilo, Álvaro Yángüez
While one-way functions (OWFs) serve as the minimal assumption for computational cryptography in the classical setting, in quantum cryptography we have even weaker cryptographic assumptions, such as pseudorandom states and EFI pairs, among others. Moreover, the minimal assumption for computational quantum cryptography remains an open question. Recently, it has been shown that pseudoentanglement is necessary for the existence of quantum cryptography (Goulão and Elkouss 2024), but no cryptographic construction has been built from it.
In this work, we study the cryptographic usefulness of quantum pseudoresources, a pair of families of quantum states that exhibit a gap in their resource content yet remain computationally indistinguishable. We show that quantum pseudoresources imply a variant of EFI pairs, which we call EPFI pairs, and that these are equivalent to quantum commitments and thus EFI pairs. Our results suggest that, just as randomness is fundamental to classical cryptography, quantum resources may play a similarly crucial role in the quantum setting.
Finally, we focus on the specific case of entanglement, analyzing different definitions of pseudoentanglement and their implications for constructing EPFI pairs. Moreover, we propose a new cryptographic functionality that is intrinsically dependent on entanglement as a resource.
Gorjan Alagic, Joseph Carolan, Christian Majenz, Saliha Tokat
The sponge is a cryptographic construction that turns a public permutation into a hash function. When instantiated with the Keccak permutation, the sponge forms the NIST SHA-3 standard. SHA-3 is a core component of most post-quantum public-key cryptography schemes slated for worldwide adoption.
While one can consider many security properties for the sponge, the ultimate one is \emph{indifferentiability from a random oracle}, or simply \emph{indifferentiability}. The sponge was proved indifferentiable against classical adversaries by Bertoni et al. in 2008. Despite significant efforts in the years since, little is known about sponge security against quantum adversaries, even for simple properties like preimage or collision resistance beyond a single round. This is primarily due to the lack of a satisfactory quantum analog of the lazy sampling technique for permutations.
In this work, we develop a specialized technique that overcomes this barrier in the case of the sponge. We prove that the sponge is in fact indifferentiable from a random oracle against quantum adversaries. Our result establishes that the domain extension technique behind SHA-3 is secure in the post-quantum setting. Our indifferentiability bound for the sponge is a loose $O(\mathsf{poly}(q) 2^{-\min(r, c)/4})$, but we also give bounds on preimage and collision resistance that are tighter.
Gennaro Avitabile, Vincenzo Botta, Dario Fiore
Traceable ring signatures enhance ring signatures by adding an accountability layer. Specifically, if a party signs two different messages within the protocol, their identity is revealed. Another desirable feature is $\textit{extendability}$. In particular, $\textit{extendable threshold}$ ring signatures (ETRS) allow one to $\textit{non-interactively}$ update already finalized signatures by enlarging the ring or the set of signers.
Combining traceability and extendability in a single scheme is unexplored and would offer a new tool for privacy-preserving voting schemes in scenarios where the voters are not known in advance. In this paper, we show how to reconcile both properties by introducing and constructing a new cryptographic primitive called Tetris. Notably, our Tetris construction simultaneously achieves strong anonymity and linear-size signatures, which is the main technical challenge in existing techniques. To solve this challenge, we develop a new approach to traceability that leads to several conceptual and technical contributions. Among those, we introduce and construct, based on Groth-Sahai proofs, $\textit{extendable}$ shuffle arguments that can be $\textit{non-interactively}$ updated by several provers.
Jaeseon Kim, Jeongeun Park, Hyewon Sung
Private information retrieval (PIR) enables a client to retrieve data from a server while preserving the confidentiality of the client's query. When PIR is instantiated with fully homomorphic encryption (FHE), the protocol becomes non-interactive, requiring only a query-answer exchange, and it achieves asymptotically optimal communication and computation complexity. Although several FHE-based PIR protocols have been practically implemented with the desired properties, there has been little detailed comparison among them. As a result, it remains unclear which protocol is most efficient in practice with respect to various aspects such as performance and scalability.
In this paper, we revisit existing protocols by categorizing them into two structural classes in order to analyze the advantages and disadvantages of each class in detail, with a focus on practical implementations. Furthermore, we introduce and compare various homomorphic algorithms that can be utilized for query optimization, discussing the strengths and limitations of each. Finally, we identify the most efficient protocol in terms of computational cost and memory usage as a function of database size. Additionally, we address common misconceptions that may lead to inefficient choices in real-world deployment scenarios and offer the best solutions. Our analysis and experimental results demonstrate that the less-explored design achieves a 90% reduction in communication cost and an 8× decrease in computational overhead compared to the other, challenging the common misconception.
Ole Hylland Spjeldnæs
SNAIL (Succinct, Non-interactive, Alon-compressed, Instant argument for Layered circuits) turns any depth-\(d\) arithmetic circuit into a non-interactive argument whose prover runs within a factor of \(1 + c(d,k,n)\) of plain circuit execution, where \(c(d,k,n) = \frac{3\,(k+n+1)}{k\,d + n + 1}\). For the representative choice \(k = n = 4\) and \(24 \le d \le 32\), this means only 21–28% overhead.
Core idea: A constant-round zerocheck based on a difference-driven Alon decomposition compresses all \(d\) layers into one multivariate identity. Fiat–Shamir in the random-oracle model yields a transparent argument whose hot loop uses only field additions and multiplications (no MSMs, FFTs, or Merkle trees).
Cost summary: with \(k = n = 4\) and \(24 \le d \le 32\) \[ T_{\text{prov}}(S,d)=\Bigl(1+\frac{6.75}{d}\Bigr)S,\qquad |\pi|=(d^{2}+2d)(4d+1),\qquad \varepsilon_{\text{sound}}=\frac{4nd}{|\mathbb F|}. \]
A 2.6-billion-gate trace at \(d = 32\) is proven in \(\approx 300\;\text{ms}\) on an RTX-4090 and verified on a smartphone in \(< 1\;\text{ms}\).
Width scalability: Proof size depends only on depth, so widening to billions of parallel gates leaves the verifier unchanged—ideal for ultra-wide workloads such as live-audited VM traces.
Security: The six-round interactive core is statistically sound; the Fiat–Shamir variant is computationally sound in the random-oracle model and needs no trusted setup.
Implications: SNAIL shows that a prover can run near real-time on commodity GPUs while emitting proofs small enough for mobile light clients, enabling applications such as live-streamed provable gaming, pay-per-cycle cloud nodes, and always-on verifiable CPUs.
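Plugging the representative choice \(k = n = 4\) into the overhead formulas quoted above reproduces the claimed figures; the snippet below is purely illustrative arithmetic, not part of the SNAIL system.

```python
def c_overhead(d, k=4, n=4):
    """Exact relative prover overhead c(d, k, n) = 3(k+n+1) / (k*d + n + 1)."""
    return 3 * (k + n + 1) / (k * d + n + 1)

def c_approx(d):
    """Simplified overhead 6.75/d appearing in the cost summary T_prov."""
    return 6.75 / d

# For k = n = 4 and 24 <= d <= 32, the simplified form 6.75/d gives
# roughly 28% overhead at d = 24 down to roughly 21% at d = 32,
# matching the quoted 21-28% range.
lo, hi = c_approx(32), c_approx(24)
```

Note that \(6.75/d\) is just \(c(d,4,4) \approx 3 \cdot 9 / (4d)\), the large-\(d\) simplification of the exact formula, which is why the two functions agree closely over the stated depth range.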
23 April 2025
We are proud to announce the winners of the 2025 IACR Test-of-Time Award for Eurocrypt.
The IACR Test-of-Time Award honors papers published at the 3 IACR flagship conferences 15 years ago which have had a lasting impact on the field.
The Test-of-Time award for Eurocrypt 2010 is awarded to the following paper:
On Ideal Lattices and Learning with Errors over Rings, by Vadim Lyubashevsky, Chris Peikert and Oded Regev, for introducing the Ring-LWE problem, laying the theoretical and practical foundation for efficient lattice-based cryptography.
For more information, see https://www.iacr.org/testoftime.
Congratulations to the winners!
Atsuki Momose, Kailun Qin, Ao Sakurai, Mona Vij
Confidential Computing-as-a-Service has gained significant attention in recent years, driven by rapid advances in Trusted Execution Environment (TEE) technology. Among various architectures, confidential serverless computing has emerged as a promising model. A common approach to designing confidential serverless computing involves decoupling the client workload from the initial enclave image and dynamically provisioning the workload at runtime. This enables both offloading the costly enclave bootstrapping and maintaining a fixed reference measurement for attestation. This decoupling necessitates nested attestation, where the client’s workload is attested via a custom attestation module embedded in a platform-attested enclave established at boot time. The challenge in designing nested attestation, however, is to distinguish fresh enclaves from used ones to prevent enclave reuse. Specifically, a previously used enclave may be compromised and its attestation module tampered with. If the enclave is reused for another workload, it could bypass runtime attestation, allowing unverified code to execute.
In this paper, we present a novel approach to securing nested attestation for confidential serverless computing on Intel Software Guard Extensions (Intel SGX). Unlike prior works, our approach does not rely on intra-enclave isolation techniques to sandbox the client’s workload. Instead, we leverage the Key Separation and Sharing (KSS) feature of Intel SGX to track and prevent enclave reuse based on its immutable IDs. We develop a prototype system for confidential serverless computing for Python workloads, incorporating our nested attestation scheme, and present an empirical evaluation of its performance.
We believe our scheme unveils a unique yet previously overlooked role of KSS---post-compromise identification, with broader implications beyond serverless computing. As a concrete example, we demonstrate how KSS can be leveraged to implement an enclave-binding TPM on Intel SGX, which is of independent interest.
Alper Çakan, Vipul Goyal, Omri Shmueli
Quantum fire was recently formalized by Bostanci, Nehoran and Zhandry (STOC’25). This notion considers a distribution of quantum states that can be efficiently cloned, but cannot be converted into a classical string. Previously, the work of Nehoran and Zhandry (ITCS’24) showed how to construct quantum fire relative to an inefficient unitary oracle. Later, Bostanci, Nehoran, and Zhandry gave a candidate construction based on group action assumptions and proved the correctness of their scheme; however, even in the classical oracle model, the security was only conjectured, and no security proof was given.
In this work, we give the first construction of public-key quantum fire relative to a classical oracle, and prove its security unconditionally. This gives the first classical oracle separation between the two fundamental principles of quantum mechanics that are equivalent in the information-theoretic setting: no-cloning and no-telegraphing.
Going further, we introduce a stronger notion called quantum key-fire where the clonable fire states can be used to run a functionality (such as a signing or decryption key), and prove a secure construction relative to a classical oracle. As an application of this notion, we get the first public-key encryption scheme whose secret key is clonable but satisfies unbounded leakage-resilience (Cakan, Goyal, Liu-Zhang, Ribeiro [TCC’24]), relative to a classical oracle. Unbounded leakage-resilience is closely related to, and can be seen as a generalization of, the notion of no-telegraphing.
For all of our constructions, the oracles can be made efficient (i.e., polynomial-time), assuming the existence of post-quantum one-way functions.
Jiangshan Long, Changhai Ou, Yukun Cheng, Kexin Qiao, Wei Cheng, Fan Zhang
Side-channel analysis recovers a secret by exploiting the key-dependent characteristics of the leakages. Practical techniques such as Distance-of-Means analysis (DoM), Kolmogorov-Smirnov analysis (KSA), and Cramér-von Mises analysis (CvMA) provide valuable insights about the secret from the indirect perspectives of the statistical moment and the cumulative distribution function (CDF), respectively, circumventing the direct and costly estimation of leakage probability densities and therefore enabling wider applicability in practice. Though both perspectives are informative, their relationship in the context of side-channel analysis remains unclear. In other words, the fundamental questions of "which one is better?" and "why, and under what circumstances?" remain open problems. In this paper, we introduce the probability-probability (PP) plot from statistics as a common framework for explaining the mathematical foundations of CDF-based techniques, which facilitates an intuitive understanding of their different variant strategies. Then, inspired by the growth pattern of the PP curve, we propose a novel distinguisher based on the well-known Mann-Kendall test, where measurements are managed with ordinality and nominality. This goodness-of-fit test checks whether a key-dependent binary sequence originates from a random binomial distribution, by efficiently searching potential label clusters. Finally, we explore the symmetry and dual counterpart of the CDF in mathematics, introducing the quantile-quantile (QQ) plot, and develop a technique based on the inverse cumulative distribution function (ICDF). We present a general discussion of its bridging role, regarding detail capture as well as signal-to-noise ratio (SNR).
On this basis, we establish the relationships among moment-based, ICDF-based, and CDF-based techniques, which additionally allows for bounding the security level of the CDF-based techniques using well-established metrics that are originally proposed for evaluating the traditional moment-based family. Experiments across various settings validate our theoretical findings and demonstrate the effectiveness of the two proposed distinguishers.
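To give a flavor of what a CDF-based distinguisher computes, the following is a minimal, generic sketch of the two-sample Kolmogorov-Smirnov distance between two leakage partitions; it is an illustration of the general CDF-based family only, not the paper's proposed distinguishers, and all data values are invented:

```python
import bisect

def empirical_cdf(sample):
    """Return F(x) = fraction of the sample that is <= x."""
    xs = sorted(sample)
    n = len(xs)
    return lambda x: bisect.bisect_right(xs, x) / n

def ks_statistic(a, b):
    """Two-sample KS distance: the largest gap between the two empirical CDFs,
    evaluated over all observed points."""
    Fa, Fb = empirical_cdf(a), empirical_cdf(b)
    return max(abs(Fa(x) - Fb(x)) for x in list(a) + list(b))

# Toy leakage measurements partitioned under a key guess: a correct guess
# should yield distinguishable distributions; a wrong one should not.
group0 = [0.10, 0.20, 0.25, 0.30, 0.35]
group1 = [0.60, 0.70, 0.75, 0.80, 0.90]
print(ks_statistic(group0, group1))  # the two groups are fully separated
```

Because the statistic depends only on empirical CDFs, no estimation of the underlying leakage density is needed, which is the practical appeal the abstract highlights.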
Daniel Alabi, Sainyam Galhotra, Shagufta Mehnaz, Zeyu Song, Eugene Wu
Data markets play a pivotal role in modern industries by facilitating the exchange of data for predictive modeling, targeted marketing, and research. However, as data becomes a valuable commodity, privacy and security concerns have grown, particularly regarding the personal information of individuals. This tutorial explores privacy and security issues when integrating different data sources in data market platforms. As motivation for the importance of enforcing privacy requirements, we discuss attacks on data markets focusing on membership inference and reconstruction attacks. We also discuss security vulnerabilities in decentralized data marketplaces, including adversarial manipulations by buyers or sellers. We provide an overview of privacy and security mechanisms designed to mitigate these risks. In order to enforce the least amount of trust for buyers and sellers, we focus on distributed protocols. Finally, we conclude with opportunities for future research on understanding and mitigating privacy and security concerns in distributed data markets.
Krzysztof Pietrzak, Pengxiang Wang
Truncation of cryptographic outputs is a technique that was recently introduced by Baldimtsi et al. [BCCK22]. The general idea is to try out many inputs to some cryptographic algorithm until the output (e.g. a public key or a hash value) falls into some sparse set and thus can be compressed: by trying out an expected $2^k$ different inputs one will find an output that starts with $k$ zeros.
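A minimal sketch of this rejection-sampling idea follows; SHA-256 and the `data || counter` input encoding are illustrative choices for the demonstration, not details from [BCCK22]:

```python
import hashlib

def truncate_search(data: bytes, k: int, max_tries: int = 1 << 24):
    """Try counter values until H(data || counter) starts with k zero bits.
    Expected number of tries is 2**k."""
    for ctr in range(max_tries):
        digest = hashlib.sha256(data + ctr.to_bytes(8, "big")).digest()
        # The top k bits are zero iff the digest, read as an integer,
        # is below 2**(256 - k).
        if int.from_bytes(digest, "big") >> (256 - k) == 0:
            return ctr, digest
    raise RuntimeError("no suitable input found within max_tries")

ctr, digest = truncate_search(b"example", 8)
# The first 8 bits of the digest are zero, so they can be elided when
# the value is stored, e.g. to save on-chain storage costs.
assert digest[0] == 0
```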
Using such truncation one can, for example, save substantial gas fees on blockchains where storing values is very expensive. While [BCCK22] show that truncation preserves the security of the underlying primitive, they only consider a setting without preprocessing. In this work we show that lower bounds on the time-space tradeoff for inverting random functions and permutations also hold with truncation, except for parameter ranges where the bound fails to hold for "trivial" reasons.
Concretely, it is known that any algorithm that inverts a random function or permutation with range $N$ making $T$ queries and using $S$ bits of auxiliary input must satisfy $S\cdot T\ge N\log N$. This lower bound no longer holds in the truncated setting where one must only invert a challenge from a range of size $N/2^k$, as now one can simply save the replies to all $N/2^k$ challenges, which requires $S=\log N\cdot N /2^k$ bits and allows one to invert with $T=1$ query.
We show that with truncation, whenever $S$ is somewhat smaller than the $\log N\cdot N /2^k$ bits required to store the entire truncated function table, the known $S\cdot T\ge N\log N$ lower bound applies.
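To make the trivial counterexample concrete, the arithmetic can be checked numerically; the parameters $N = 2^{30}$ and $k = 10$ below are illustrative choices, not values from the paper:

```python
from math import log2

# Illustrative parameters: range size N, truncation by k bits.
N, k = 2**30, 10
log_N = log2(N)                 # bits needed to store one preimage

# Trivial preprocessing attack: precompute and store a preimage for
# every one of the N / 2**k possible truncated challenges.
S_trivial = log_N * N / 2**k    # auxiliary-input size in bits
T_trivial = 1                   # a single lookup answers any challenge

# This attack falls below the classical S*T >= N log N bound,
# which is why the bound cannot hold for such large S.
print(S_trivial * T_trivial < N * log_N)  # True
```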