International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

30 April 2025

Panos Kampanakis, Shai Halevi, Nevine Ebeid, Matt Campagna
ePrint Report
AES-GCM has been the status quo for efficient symmetric encryption for decades. As technology and cryptographic applications have evolved, AES-GCM has posed challenges for certain use cases due to its default 96-bit nonce size, 128-bit block size, and lack of key commitment. Nonce-derived schemes are one way of addressing these challenges: such schemes derive multiple keys from nonce values, then apply standard AES-GCM with the derived keys (and possibly another 96-bit nonce). This approach overcomes the nonce-length and data-limit issues, since each derived key is used to encrypt only a few messages. By itself, however, the use of nonce-derived keys does not address key commitment. Some schemes include a built-in key-commitment mechanism, while others leave it out of scope.

In this work, we explore efficient key-commitment methods that can be added to any nonce-derived scheme in a black-box manner. Our focus is on options that use the underlying block cipher and no other primitive, are efficient, and rely only on standard, FIPS-approved primitives. For concreteness we focus here specifically on adding key commitment to XAES-256-GCM, a nonce-derived scheme originally proposed by Filippo Valsorda, but these methods can be adapted to any other nonce-derived scheme. We propose an efficient CMAC-based key-commitment solution and prove its security in the ideal-cipher model. We argue that adding this solution yields a FIPS-compliant mode, quantify the data and message-length limits of this mode, and compare this combination to other nonce-derived modes. We also benchmark the performance of our key-committing XAES-256-GCM.
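
As a rough illustration of the nonce-derived pattern described above (a sketch of the general idea, not the paper's key-committing construction and not the exact XAES-256-GCM key schedule), the following Python derives a per-message AES-256 key from the first half of an extended nonce with AES-CMAC, then applies standard AES-GCM under the derived key; the counter-byte layout of the KDF is an illustrative assumption.

```python
# Sketch only: nonce-derived AES-GCM in the shape of XAES-256-GCM.
# The KDF layout below is illustrative, not the published XAES spec,
# and this sketch by itself provides no key commitment.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers import algorithms

def derive_key(master_key: bytes, nonce_prefix: bytes) -> bytes:
    halves = []
    for counter in (b"\x01", b"\x02"):          # two CMAC blocks -> 32 bytes
        c = CMAC(algorithms.AES(master_key))
        c.update(counter + nonce_prefix)
        halves.append(c.finalize())
    return b"".join(halves)                     # fresh AES-256 key

def seal(master_key: bytes, nonce: bytes, plaintext: bytes, aad: bytes = b"") -> bytes:
    assert len(nonce) == 24                     # extended 192-bit nonce
    key = derive_key(master_key, nonce[:12])    # derive from the nonce prefix
    return AESGCM(key).encrypt(nonce[12:], plaintext, aad)  # 96-bit GCM nonce
```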

Pascal Giorgi, Fabien Laguillaumie, Lucas Ottow, Damien Vergnaud
ePrint Report
Threshold public-key encryption securely distributes private key shares among multiple participants, requiring a minimum number of them to decrypt messages. We introduce a quantum-resistant threshold public-key encryption scheme, based on the code-based Niederreiter cryptosystem, that achieves security against chosen-ciphertext attacks. A previous attempt was made recently by Takahashi, Hashimoto, and Ogata (published at DCC in 2023), but we show that it contains a critical security flaw that allows adversaries to exploit malformed ciphertexts to gain information about the secret key.

In this work, we formalize a generic conversion that enhances the security of a (classical) public-key encryption scheme from one-wayness against passive attacks to indistinguishability against chosen-ciphertext attacks. The conversion uses a non-interactive zero-knowledge argument with strong security properties to ensure ciphertext well-formedness. We then provide an instantiation for Niederreiter encryption based on recent techniques introduced in the "MPC-in-the-head" paradigm. The publicly verifiable validity of ciphertexts makes this scheme suitable for threshold public-key encryption and prevents an attack similar to the one on the Takahashi-Hashimoto-Ogata scheme. To improve the multi-party computation protocol for decryption (involving secure computations on polynomials), we introduce a field-switching technique that significantly reduces the shared secret-key size and the computational overhead.

Xue Yang, Ruida Wang, Depan Peng, Kun Liu, Xianhui Lu, Xiaohu Tang
ePrint Report
This work addresses hintless single-server Private Information Retrieval (PIR) from the perspective of high-level protocol design and introduces PIRCOR and PIRCOR$^{*}$, which outperform the state-of-the-art PIRANA (Liu et al., IEEE S&P 2024) and YPIR (Menon and Wu, USENIX Security 2024) in terms of query size and query generation time. In PIRCOR, we construct an efficient Rotation-based Expanded Binary Code (REBC) to expand $\alpha$ primary codewords into $\beta$ expanded codewords via the Rotation-Mutual-Multiplication operation. By leveraging the REBC, PIRCOR reduces the query size for single-query PIR by a factor of $\mathcal{O}\left(N^{\frac{\delta-1}{\delta}}\right)$ compared to PIRANA, while also avoiding the $\mathcal{O}(N +\frac{|\mathrm{DB}|}{N})$ linear scaling inherent in YPIR (where $N$, $\delta$ and $|\mathrm{DB}|$ denote the (R)LWE secret dimension, the number of codewords with a Hamming weight of $1$, and the number of database elements, respectively). Based on PIRCOR, we further present PIRCOR$^{*}$, which additionally introduces the Rotation-self-Multiplication operation and achieves a $\mathbf{50\%}$ reduction in rotation operations and a smaller query size when $\delta = 2$. Building upon PIRCOR and PIRCOR$^{*}$, we propose optimized variants, PIRCOR-op and PIRCOR$^{*}$-op, that further reduce the online response time. Like YPIR, which leverages pre-processing, PIRCOR-op and PIRCOR$^{*}$-op allow all rotations and part of the multiplications to be carried out in an offline stage before the query is received. We also design FHE-operator acceleration with leveled optimization and an optimized implementation of ciphertext rotation. For 8 KB element retrieval from an 8 GB database, PIRCOR achieves a $\mathbf{10.7\times}$ query size reduction compared to PIRANA. Benchmarked against YPIR, the improvements are even more striking: PIRCOR reduces the query size by $\mathbf{26.8\times}$ and accelerates query generation by $\mathbf{6,080\times}$. Notably, the enhanced PIRCOR$^{*}$ achieves a $\mathbf{53.6\times}$ reduction in query size compared to YPIR, while improving query generation time by $\mathbf{12,160\times}$.
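
For intuition, the weight-1 codewords mentioned above play the role of a selection vector: the client's query selects one database element as an inner product. The toy sketch below shows this in plaintext; in the actual protocols the vector is packed into (R)LWE ciphertexts (and REBC compresses it further), so all names and shapes here are illustrative only.

```python
# Toy plaintext analogue of a weight-1 (one-hot) PIR query.
def make_query(index: int, db_size: int):
    return [int(i == index) for i in range(db_size)]   # Hamming weight 1

def answer(db, query):
    # Server side: inner product of the query vector with the database.
    return sum(q * item for q, item in zip(query, db))

db = [10, 20, 30, 40]
assert answer(db, make_query(2, len(db))) == 30  # db[2]; privacy needs encryption
```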

Zhengjun Cao, Lihua Liu
ePrint Report
We show that the data aggregation scheme [IEEE TDSC, 2023, 20(3), 2011-2024] is flawed because the signer signs only part of the data, not the whole. An adversary can replace the unsigned component to cheat the verifier. To frustrate this attack, all components of the target data should be concatenated, then hashed and signed, so that signature verification proves the integrity of the whole message.
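
A minimal sketch of the suggested fix, assuming SHA-256 (the length prefixes are an added detail that keeps the concatenation unambiguous): compute a single digest over all components and sign that digest rather than any single component.

```python
import hashlib

def digest_all(*components: bytes) -> bytes:
    h = hashlib.sha256()
    for part in components:
        h.update(len(part).to_bytes(8, "big"))  # length prefix avoids ambiguity
        h.update(part)
    return h.digest()                           # sign this digest

# ("ab", "c") and ("a", "bc") now hash differently:
assert digest_all(b"ab", b"c") != digest_all(b"a", b"bc")
```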

Vasyl Ustimenko, Tymoteusz Chojecki
ePrint Report
Assume that one of two trusted parties (the administrator) manages an information system (IS) and the other (the user) is going to use the resources of this IS during a certain time interval. They therefore need to establish a secure user access password to the resources of this IS via a selected authenticated key exchange protocol. Thus they must communicate over an insecure channel and secretly construct a cryptographically strong session key that can serve for the establishment of secure passwords, in the form of tuples over a certain alphabet, during that time interval. Nowadays the selected protocol has to be post-quantum secure. We propose an implementation of this scheme in terms of symbolic computations. The key exchange protocol is one of the key exchange algorithms of Noncommutative Cryptography, with the platform of multivariate transformations of the affine space over a selected finite commutative ring. The session key is a multivariate map on the affine space. Platforms and multivariate maps are constructed in terms of Algebraic Graph Theory.

28 April 2025

Benedikt Bünz, Alessandro Chiesa, Giacomo Fenzi, William Wang
ePrint Report
Proof-carrying data (PCD) is a powerful cryptographic primitive for computational integrity in a distributed setting. State-of-the-art constructions of PCD are based on accumulation schemes (and, closely related, folding schemes). We present WARP, the first accumulation scheme with linear prover time and logarithmic verifier time. Our scheme is hash-based (secure in the random oracle model), plausibly post-quantum secure, and supports unbounded accumulation depth. We achieve our result by constructing an interactive oracle reduction of proximity that works with any linear code over a sufficiently large field. We take a novel approach by constructing a straightline extractor that relies on erasure correction, rather than error-tolerant decoding like prior extractors. Along the way, we introduce a variant of straightline round-by-round knowledge soundness that is compatible with our extraction strategy.

Gulshan Kumar, Rahul Saha, Mauro Conti, William J Buchanan
ePrint Report
Smart contracts are integral to decentralized systems like blockchains and enable the automation of processes through programmable conditions. However, their immutability once deployed poses challenges when addressing errors or bugs. Existing solutions, such as proxy contracts, facilitate upgrades while preserving application integrity, yet proxy contracts bring issues such as storage constraints and proxy selector clashes, along with complex inheritance management. This paper introduces a novel upgradeable smart contract framework with version control, named "decentraLized vErsion control and updAte manaGement in upgrAdeable smart coNtracts (LEAGAN)". LEAGAN is the first decentralized updatable smart contract framework that employs data separation with Incremental Hash (IH) and the Revision Control System (RCS). It updates multiple contract versions without starting anew for each update, reducing time complexity, while RCS optimizes space utilization through differentiated version control. LEAGAN also introduces the first status contract in upgradeable smart contracts, which reduces overhead while maintaining immutability. In Ethereum Virtual Machine (EVM) experiments, LEAGAN shows 40% better space utilization, 30% improved time complexity, and 25% lower gas consumption compared to state-of-the-art models. It thus stands as a promising solution for enhancing blockchain system efficiency.
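
As an illustration of the RCS-style space optimization mentioned above (a toy in-memory model, not LEAGAN's on-chain layout), the sketch below keeps a base version and stores each update only as a line diff, from which any version can be replayed:

```python
import difflib

class DiffStore:
    """Toy RCS-style store: a full base version plus per-update line diffs."""
    def __init__(self, initial: str):
        self.base = initial.splitlines()
        self.deltas = []                          # one diff per committed update

    def commit(self, new: str):
        head = self.version(len(self.deltas))
        self.deltas.append(list(difflib.ndiff(head, new.splitlines())))

    def version(self, i: int):
        lines = self.base
        for delta in self.deltas[:i]:             # replay diffs up to version i
            lines = list(difflib.restore(delta, 2))
        return lines

s = DiffStore("a\nb")
s.commit("a\nb\nc")
assert s.version(0) == ["a", "b"] and s.version(1) == ["a", "b", "c"]
```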

Eyal Kushnir, Hayim Shaul
ePrint Report
Range counting is the problem of preprocessing a set $P\subset \mathbb{R}^d$ of $n$ points, such that given a query range $\gamma$ we can efficiently compute $|P\cap\gamma|$. In the more general range searching problem the goal is to compute $f(P\cap\gamma)$ for some function $f$.

Kushnir et al. (PETS'24) showed how to efficiently answer a range searching query under FHE, using a technique they called copy-and-recurse to traverse partition trees.

In the range emptiness problem the goal is only to decide whether $P\cap\gamma=\emptyset$. In the plaintext setting this is known to be solvable more efficiently than range counting. Range emptiness is interesting in its own right and is also used as a building block in other algorithms.

In this paper we improve and extend the results of Kushnir et al. First, for range searching we reduce the overhead term to the optimal $O(n)$: for example, if the ranges are halfspaces in $\mathbb{R}^d$ bounded by hyperplanes, then range searching can be done with a circuit of size $O(t\cdot n^{1-1/d+\varepsilon}+n)$, where $t$ is the size of the sub-circuit that checks whether a point lies under a hyperplane.

Second, we introduce a variation of copy-and-recurse that we call leveled copy-and-recurse. With this variation we improve range searching in the 1-dimensional case as well as traversal of other trees (e.g., binary trees and B-trees). Third, we show how to answer range emptiness queries under FHE more efficiently than range counting.

We implemented our algorithms and show that our techniques for range emptiness yield a solution that is $3.6\times$ faster than the previous results for a database of $2^{25}$ points.
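
For intuition, a plaintext analogue of the 1-dimensional case shows why emptiness can be cheaper than counting: a count needs two binary searches, while emptiness only needs the successor of the lower endpoint. (The paper's constructions traverse trees obliviously under FHE; this sketch only illustrates the underlying combinatorics.)

```python
import bisect

def range_count(sorted_pts, lo, hi):
    # Two binary searches: rank of hi (right) minus rank of lo (left).
    return bisect.bisect_right(sorted_pts, hi) - bisect.bisect_left(sorted_pts, lo)

def range_empty(sorted_pts, lo, hi):
    # One search: the range is empty iff the successor of lo exceeds hi.
    i = bisect.bisect_left(sorted_pts, lo)
    return i == len(sorted_pts) or sorted_pts[i] > hi

pts = [1, 4, 4, 9, 15]
assert range_count(pts, 3, 9) == 3 and not range_empty(pts, 3, 9)
assert range_empty(pts, 10, 14)
```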

Gustaf Ahlgren, Onur Gunlu
ePrint Report
Secure rate-distortion-perception (RDP) trade-offs arise in critical applications, such as semantic compression and privacy-preserving generative coding, where preserving perceptual quality while minimizing distortion is vital. This paper studies a framework for secure RDP over noiseless and noisy broadcast channels under strong secrecy constraints. We first characterize the exact secure RDP region for noiseless transmission channels. We then develop an inner bound on the secure RDP region for a memoryless broadcast channel with correlated noise components in the receivers' observations, and prove its tightness under a more capable broadcast channel assumption. Our results demonstrate how optimized binning schemes simultaneously achieve high perceptual quality, low distortion, and strong secrecy, illuminating fundamental information-theoretic limits for next-generation trustworthy computation systems.

Ruihao Dai, Jiankuo Dong, Mingrui Qiu, Zhenjiang Dong, Fu Xiao, Jingqiang Lin
ePrint Report
Quantum computers leverage qubits to solve certain computational problems significantly faster than classical computers. This capability poses a severe threat to traditional cryptographic algorithms, leading to the rise of post-quantum cryptography (PQC) designed to withstand quantum attacks. FALCON, a lattice-based signature algorithm, has been selected by the National Institute of Standards and Technology (NIST) as part of its post-quantum cryptography standardization process. However, due to the computational complexity of PQC, especially in cloud-based environments, throughput limitations during peak demand periods have become a bottleneck, particularly for FALCON. In this paper, we introduce GOLF (GPU-accelerated Optimization for Lattice-based FALCON), a novel GPU-based parallel acceleration framework for FALCON. GOLF includes algorithm porting to the GPU, compatibility modifications, multi-threaded parallelism with distinct data, single-thread optimization for single tasks, and specific enhancements to the Fast Fourier Transform (FFT) module within FALCON. Our approach achieves unprecedented performance in FALCON acceleration on GPUs. On the NVIDIA RTX 4090, GOLF reaches a signature generation throughput of 42.02 kops/s and a signature verification throughput of 10,311.04 kops/s. These results represent a 58.05$\times$ / 73.14$\times$ improvement over the reference FALCON implementation and a 7.17$\times$ / 3.79$\times$ improvement compared to the fastest known GPU implementation to date. GOLF demonstrates that GPU acceleration is not only feasible for post-quantum cryptography but also crucial for addressing throughput bottlenecks in real-world applications.

Wen Wu, Jiankuo Dong, Zhen Xu, Zhenjiang Dong, Dung Duong, Fu Xiao, Jingqiang Lin
ePrint Report
The Classic McEliece key encapsulation mechanism (KEM), a candidate in the fourth-round post-quantum cryptography (PQC) standardization process by the National Institute of Standards and Technology (NIST), stands out for its conservative design and robust security guarantees. Leveraging the code-based Niederreiter cryptosystem, Classic McEliece delivers high-performance encapsulation and decapsulation, making it well-suited for various applications. However, there has been no systematic implementation of Classic McEliece on GPU platforms. This paper presents the first high-performance implementation of Classic McEliece on NVIDIA GPUs. Firstly, we present a GPU-based implementation of Classic McEliece utilizing a "CPU-GPU" heterogeneous approach and a kernel-fusion strategy, significantly reducing global memory accesses and optimizing memory access patterns. This results in encapsulation and decapsulation performance of 28,628,195 ops/s and 3,051,701 ops/s, respectively, for McEliece348864. Secondly, core operations such as the Additive Fast Fourier Transform (AFFT) and Transpose AFFT (TAFFT) are optimized. We introduce the concept of the (T)AFFT stepping chain and propose two universal schemes, the Memory Access Stepping Strategy (MASS) and the Layer-Fused Memory Access Stepping Strategy (LFMASS), which achieve speedups of 30.56% and 38.37%, respectively, over the native GPU-based McEliece6960119 implementation. Thirdly, extensive experiments on the NVIDIA RTX4090 show significant performance gains, achieving up to 344$\times$ higher encapsulation throughput and 125$\times$ higher decapsulation throughput compared to the official CPU-based AVX implementation, decisively outperforming existing ARM Cortex-M4 and FPGA implementations.

Obrochishte, Bulgaria, 1 June - 16 June 2025
Event Calendar
Event date: 1 June to 16 June 2025

Indian Institute Of Technology Indore, India, 16 December - 20 December 2025
Event Calendar
Event date: 16 December to 20 December 2025
Submission deadline: 10 July 2025
Notification: 30 September 2025

Hanoi, Vietnam, 26 August 2025
Event Calendar
Event date: 26 August 2025
Submission deadline: 25 April 2025
Notification: 17 May 2025

Eindhoven, Netherlands, 6 June -
Event Calendar
Event date: 6 June to

27 April 2025

Alexey S. Zelenetsky, Peter G. Klyucharev
ePrint Report
This work introduces Zemlyanika, a post-quantum IND-CCA secure key encapsulation mechanism based on the Module-LWE problem. The high-level design of Zemlyanika follows a well-known approach where a passively secure public-key encryption scheme is transformed into an actively secure key encapsulation mechanism using the Fujisaki-Okamoto transform.

Our scheme features three main elements: a power-of-two modulus, explicit rejection, and revised requirements for decapsulation error probability.

The choice of a power-of-two modulus is atypical for Module-LWE based schemes because it makes the Number Theoretic Transform (NTT) unavailable. However, we argue that this option offers advantages that are often underestimated. We employ explicit rejection because it is more efficient than implicit rejection; recent works show that both types of rejection are equally secure, so this choice does not reduce security. Finally, we present compelling arguments that the probability of decapsulation failure may be allowed to be higher than commonly accepted, which lets us increase performance and security against attacks on Module-LWE.
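
To illustrate the NTT point: in a typical Module-LWE ring $\mathbb{Z}_q[x]/(x^n+1)$ with $q = 2^k$, ring products cannot use the NTT and fall back to convolution-style algorithms. The schoolbook sketch below (parameters are placeholders; real implementations would use Karatsuba/Toom-style methods) shows the negacyclic reduction involved.

```python
def negacyclic_mul(a, b, n, k):
    """Schoolbook product in Z_{2^k}[x]/(x^n + 1); no NTT exists mod 2^k."""
    q = 1 << k
    res = [0] * n
    for i in range(n):
        for j in range(n):
            c = a[i] * b[j]
            if i + j < n:
                res[i + j] = (res[i + j] + c) % q
            else:
                res[i + j - n] = (res[i + j - n] - c) % q   # wrap: x^n = -1
    return res

# (1 + x) * x^(n-1) = x^(n-1) + x^n = -1 + x^(n-1) in the negacyclic ring.
n, k = 8, 16
a = [1, 1] + [0] * (n - 2)
b = [0] * (n - 1) + [1]
assert negacyclic_mul(a, b, n, k) == [(1 << k) - 1] + [0] * (n - 2) + [1]
```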

Krishnendu Chatterjee, Seth Gilbert, Stefan Schmid, Jakub Svoboda, Michelle Yeo
ePrint Report
Liquid democracy is a transitive vote delegation mechanism over voting graphs. It enables each voter to delegate their vote(s) to another better-informed voter, with the goal of collectively making a better decision. The question of whether liquid democracy outperforms direct voting has been previously studied in the context of local delegation mechanisms (where voters can only delegate to someone in their neighbourhood) and binary decision problems. It has previously been shown that it is impossible for local delegation mechanisms to outperform direct voting in general graphs. This raises the question: for which classes of graphs do local delegation mechanisms yield good results?

In this work, we analyse (1) properties of specific graphs and (2) properties of local delegation mechanisms on these graphs, determining where local delegation actually outperforms direct voting. We show that a critical graph property enabling liquid democracy is that the voting outcome of local delegation mechanisms preserves a sufficient amount of variance, thereby avoiding situations where delegation falls behind direct voting. These insights allow us to prove our main results, namely that there exist local delegation mechanisms that perform no worse and in fact quantitatively better than direct voting in natural graph topologies like complete, random $d$-regular, and bounded degree graphs, lending a more nuanced perspective to previous impossibility results.
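
As a toy experiment in the spirit of this comparison (every modelling choice below, from the circulant graph to the competence distribution and the one-hop delegation rule, is an illustrative assumption rather than the paper's model), one can simulate local delegation against direct majority voting:

```python
import random

def trial(rng, n=101, d=4):
    comp = [rng.uniform(0.5, 0.9) for _ in range(n)]      # voter competences
    direct = sum(rng.random() < c for c in comp) > n / 2  # direct majority vote
    nbrs = [[(v + k) % n for k in range(1, d + 1)] for v in range(n)]
    weight = [1] * n
    for v in range(n):                                    # delegate one hop to the
        best = max(nbrs[v], key=lambda u: comp[u])        # most competent neighbour,
        if comp[best] > comp[v]:                          # if strictly better
            weight[best] += weight[v]
            weight[v] = 0
    deleg = sum(w for v, w in enumerate(weight)
                if w and rng.random() < comp[v]) > n / 2  # weighted majority
    return direct, deleg

runs = [trial(random.Random(seed)) for seed in range(500)]
print("direct:", sum(r[0] for r in runs) / 500,
      "delegation:", sum(r[1] for r in runs) / 500)
```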

Zhuang Shan, Leyou Zhang, Fuchun Guo, Yong Yu
ePrint Report
We were deeply impressed by the paper of Ateniese et al. at Crypto 2019, in which they presented a black-box construction of matchmaking encryption (ME) based on functional encryption. In our work, we propose an ME scheme based on standard assumptions in the standard model; it is proven secure under the learning with errors (LWE) assumption. Our ME scheme is built on a novel framework of bilateral-policy attribute-based encryption (BP-ABE) and a new intermediate primitive termed a perturbed pseudorandom generator (PPRG), which provides the authentication functionality in place of non-interactive zero-knowledge proofs.

In our scheme, the user's "public key" is generated from Hamming correlation robustness and the user's attributes. Note that the "public key" is not actually public. To preserve the privacy of the two parties involved in matchmaking encryption, our BP-ABE scheme does not use the "public key" directly to encrypt the plaintext. Instead, the message sender selects matching attributes and uses Hamming correlation robustness together with a homomorphic pseudorandom function (HPRF) to generate temporary public keys that hide the public key and the user attributes.

When these temporary public keys satisfy the access policy, the receiver can decrypt the data using their private key. For the authentication functionality of matchmaking encryption, we propose a non-interactive private set intersection (PSI) scheme based on the HPRF and the PPRG. The message sender encrypts their "public key" under the proposed PSI scheme as part of the ciphertext. The receiver likewise encrypts their "public key" under the PSI scheme and matches the attributes, thereby completing message authentication. We consider our approach a significant departure from existing constructions, despite its simplicity.
Stephan Krenn, Thomas Lorünser, Sebastian Ramacher, Federico Valbusa
ePrint Report
As quantum computing matures, its impact on traditional cryptographic protocols becomes increasingly critical, especially in data-at-rest scenarios where large data sets remain encrypted for extended periods of time. This paper addresses the pressing need to transition away from pre-quantum algorithms by presenting an agile cryptosystem that securely and efficiently supports post-quantum Key Encapsulation Mechanisms (KEMs). The proposed solution combines a CCA-secure KEM with a robust Authenticated Encryption (AE) scheme, allowing only the dynamic component (the symmetric key encapsulation) to be updated when migrating to new cryptographic algorithms. This approach eliminates the need to re-encrypt potentially massive data payloads, resulting in significant savings in computational overhead and bandwidth. We formalize the concept of crypto-agility through an agile-CCA security model, which requires that neither the original ciphertext nor any updated version reveals meaningful information to an attacker. A game-based proof shows that the overall construction remains agile-CCA secure if the underlying KEM and AE scheme are individually CCA secure, in the random oracle model. The result is a future-proof scheme that eases the transition to post-quantum standards, enabling enterprises and cloud storage providers to protect large amounts of data with minimal disruption while proactively mitigating emerging quantum threats.
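
A minimal sketch of the agility pattern described above, assuming a generic KEM object exposing encap()/decap() (a placeholder interface, not any specific library API): the payload is encrypted once under a random data key, the KEM wraps only that key, and migration re-wraps 32 bytes instead of re-encrypting the payload.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal(kem, pk, data: bytes):
    dek = os.urandom(32)                            # long-lived data key
    n_data, n_wrap = os.urandom(12), os.urandom(12)
    payload = (n_data, AESGCM(dek).encrypt(n_data, data, b""))     # static part
    kem_ct, shared = kem.encap(pk)                  # placeholder KEM interface
    wrap = (kem_ct, n_wrap,
            AESGCM(shared[:32]).encrypt(n_wrap, dek, kem_ct))      # dynamic part
    return wrap, payload

def migrate(old_kem, old_sk, new_kem, new_pk, wrap):
    kem_ct, n_wrap, wrapped_dek = wrap
    shared = old_kem.decap(old_sk, kem_ct)          # recover the data key ...
    dek = AESGCM(shared[:32]).decrypt(n_wrap, wrapped_dek, kem_ct)
    new_ct, new_shared = new_kem.encap(new_pk)      # ... and re-wrap it under the
    n2 = os.urandom(12)                             # new (post-quantum) KEM; the
    return (new_ct, n2,                             # payload is left untouched
            AESGCM(new_shared[:32]).encrypt(n2, dek, new_ct))
```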