20 February 2025
Wilson Nguyen, Srinath Setty
This paper introduces Neo, a new lattice-based folding scheme for CCS, an NP-complete relation that generalizes R1CS, Plonkish, and AIR. Neo's folding scheme can be viewed as adapting the folding scheme in HyperNova (CRYPTO'24), which assumes elliptic-curve based linearly homomorphic commitments, to the lattice setting. Unlike HyperNova, Neo can use “small” prime fields (e.g., over the Goldilocks prime). Additionally, Neo provides plausible post-quantum security.
Prior to Neo, folding schemes in the lattice setting, notably LatticeFold (ePrint 2024/257), worked with constraint systems defined over a cyclotomic polynomial ring. This structure allows packing a fixed batch of constraint systems over a small prime field into a single constraint system over a polynomial ring. However, it introduces significant overheads, both for committing to witnesses (e.g., the commitment scheme cannot take advantage of the bit-width of values), and within the folding protocol itself (e.g., the sum-check protocol is run over cyclotomic polynomial rings). Additionally, the required ring structure places restrictions on the choice of primes (e.g., LatticeFold is not compatible with the Goldilocks field).
Neo addresses these problems by drawing inspiration from both HyperNova and LatticeFold. A key contribution is a folding-friendly instantiation of Ajtai's commitments with "pay-per-bit" commitment costs, i.e., the commitment costs scale with the bit-width of the scalars (e.g., committing to a vector of bits is $32\times$ cheaper than committing to a vector of $32$-bit values). This scheme commits to vectors over a small prime field by transforming the provided vector into a matrix and committing to that matrix. We prove that this commitment scheme provides the linear homomorphism needed to build a folding scheme. Additionally, like HyperNova, Neo runs a single invocation of the sum-check protocol; in HyperNova it runs over the scalar field of an elliptic curve, whereas in Neo it runs over an extension of a small prime field.
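The pay-per-bit idea can be illustrated with a toy sketch: decompose each scalar into binary digits, so the opening is a small-entry matrix, and commit to the matrix Ajtai-style. All parameters and names here are illustrative, not Neo's actual scheme (a real instantiation needs security-calibrated dimensions, norm checks, and more); the sketch only demonstrates the linear homomorphism over matrix openings.

```python
import random

random.seed(0)
q = 2**31 - 1   # commitment modulus (illustrative only, not Neo's parameters)
n, k = 4, 8     # n rows in the Ajtai matrix; k binary digits per scalar

# Public Ajtai matrix A (n x k), sampled uniformly mod q.
A = [[random.randrange(q) for _ in range(k)] for _ in range(n)]

def decompose(v):
    """Map a vector of small scalars to a k x len(v) matrix of bits.
    Column j holds the binary digits of v[j]: a vector of bits occupies a
    single nonzero digit row, which is the source of "pay-per-bit" savings."""
    return [[(x >> i) & 1 for x in v] for i in range(k)]

def commit_matrix(M):
    """Ajtai-style commitment to a small-entry matrix: C = A * M mod q."""
    cols = len(M[0])
    return [[sum(A[r][i] * M[i][c] for i in range(k)) % q for c in range(cols)]
            for r in range(n)]

def commit(v):
    return commit_matrix(decompose(v))

# Linear homomorphism over matrix openings: A*M1 + A*M2 = A*(M1 + M2).
v1, v2 = [5, 200, 13], [9, 31, 250]
M1, M2 = decompose(v1), decompose(v2)
M_sum = [[M1[i][j] + M2[i][j] for j in range(len(v1))] for i in range(k)]
c1, c2 = commit(v1), commit(v2)
lhs = [[(c1[r][c] + c2[r][c]) % q for c in range(len(v1))] for r in range(n)]
rhs = commit_matrix(M_sum)
```

Note that the sum of two commitments opens to the entry-wise sum of the digit matrices, whose entries are slightly larger but still small, which is what a folding scheme needs.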
Yevgeniy Dodis, Eli Goldin
Ever since the introduction of encryption, society has been divided over whether the government (or law enforcement agencies) should have the capability to decrypt private messages (with or without a warrant) of its citizens. From a technical viewpoint, the folklore belief is that semantic security always enables some form of steganography. Thus, adding backdoors to semantically secure schemes is pointless: it only weakens the security of the “good guys”, while “bad guys” can easily circumvent censorship, even if forced to hand over their decryption keys.
In this paper we put a dent in this folklore. We formalize three worlds: Dictatoria (“dictator wins”: no convenient steganography, no user cooperation needed), Warrantland (“checks-and-balances”: no convenient steganography, but need user’s cooperation) and Privatopia (“privacy wins”: built-in, high-rate steganography, even if giving away the decryption key). We give strong evidence that all these worlds are possible, thus reopening the encryption debate on a technical level.
Our main novelty is the definition and design of special encryption schemes we call anamorphic-resistant (AR). In contrast to the so-called “anamorphic schemes” studied in the literature, which form the basis of Privatopia, any attempt to steganographically communicate over an AR-encryption scheme will be either impossible or hugely slow (depending on the definitional details).
Tamar Ben David, Anat Paskin-Cherniavsky
Komargodski et al. defined evolving secret-sharing schemes with an unbounded number of parties. In this model, parties arrive one after the other, and the number of parties that will eventually arrive is not known.
Another cryptographic primitive related to secret sharing is conditional disclosure of secrets (CDS) protocols, defined by Gertner et al.
A CDS protocol for a Boolean function $f$ involves $k$ servers and a referee. Each server holds a common secret $s$, a common random string $r$, and a private input $x_i$; using these, each server locally computes one message and sends it to the referee. The referee, knowing the inputs $x_1, \cdots, x_k$ and the messages, should be able to compute $s$ if $f(x_1, \cdots, x_k) = 1$. Otherwise, the referee should not learn information about the secret. In a sense, this is a non-monotone version of secret-sharing schemes.
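As a concrete toy instance of this template (a textbook example, not a construction from this paper), here is the classic two-server CDS for the equality predicate over a small prime field: each server sends one linear message, and the referee recovers the secret exactly when the inputs match.

```python
import random

random.seed(1)
p = 101  # a small prime field, chosen only for illustration

def cds_equality(x1, x2, s):
    """Two-server CDS for f(x1, x2) = [x1 == x2] over F_p.
    The shared randomness (a, b) is known to both servers but not the referee."""
    a, b = random.randrange(p), random.randrange(p)
    m1 = (a * x1 + b + s) % p   # server 1's single message
    m2 = (a * x2 + b) % p       # server 2's single message
    # Referee computes m1 - m2 = a*(x1 - x2) + s: if x1 == x2 this is s;
    # otherwise a*(x1 - x2) is uniform over F_p and perfectly masks s.
    return (m1 - m2) % p
```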
Peter (ISC '23) defines evolving CDS, implementing in particular evolving predicates $f:\{0,1\}^*\rightarrow\{0,1\}$ (he handles somewhat more general predicates over larger input domains, but generalizing to other input domains is not hard, and we focus on Boolean predicates). In this setting, the number of parties is unbounded, and the parties arrive in sequential order. Each party, upon arrival, sends one random message to the referee, depending on its input, the secret, and a common random string. He also devises evolving CDS protocols for a general evolving predicate via a black-box reduction to an evolving secret-sharing scheme for a related access structure. He analyzes this construction for general access structures, as well as for other classes of functions, which yields message complexity $O(2^t)$ for the worst predicates.
In this work we provide new upper and lower bounds for evolving CDS. Observing that CDS has the potential for improved efficiency, as it is not restricted to monotone operations, we devise a new evolving CDS construction for general predicates. In particular, our construction relies on representing the evolving predicate via an infinite branching program (LINBP), generalizing the monotone infinite branching program based construction of Alon et al. for evolving secret-sharing schemes. We obtain nontrivial message complexity ($2^{ct-o(t)}$ for $c<1$) for branching programs of larger width than Alon et al.'s construction handles (even when restricting attention to monotone predicates), and improve on Peter's construction for certain (but not all) $f$'s.
Indeed, we prove that our construction, as well as Peter's (ISC '23), is tight for a certain evolving predicate: as for evolving secret-sharing, (so-called strong) evolving CDS also requires share complexity of $2^{t-o(t)}$. This is unlike the state of affairs in the finite setting, where the best known CDS constructions are much more efficient than the best known secret-sharing schemes (for the hardest monotone functions). The latter bound is proved via an adaptation of Mazor's lower bound (in turn based on Csirmaz's lower-bounding technique) to the CDS setting. It relies on a reduction from secret sharing for a certain class of infinite access structures, the so-called partite access structures, to evolving CDS for a related (not necessarily monotone) function. Then, a partite evolving access structure is crafted using a Csirmaz-type argument.
Rishab Goyal, Venkata Koppula, Mahesh Sreekumar Rajasree
We present the first construction of adaptively secure HIBE that does not rely on bilinear pairings or random oracle heuristics. Notably, we design an adaptively secure HIBE from any selectively secure IBE system in the standard model. Combining this with known results gives the first adaptively secure HIBE system from a wide variety of standard assumptions such as CDH/Factoring/LWE/LPN. We also extend our adaptively secure HIBE system to satisfy full anonymity, giving the first adaptively secure anonymous HIBE under the CDH/LWE assumption. All our HIBE systems support unbounded-length identities as well as an unbounded number of recursive delegation operations.
Ky Nguyen, David Pointcheval, Robert Schädlich
Dynamic Decentralized Functional Encryption (DDFE) is a generalization of Functional Encryption which allows multiple users to join the system dynamically without interaction and without relying on a trusted third party. Users can independently encrypt their inputs for a joint evaluation under functions embedded in functional decryption keys; and they keep control over these functions, as they all have to contribute to the generation of the functional keys.
In this work, we present new generic compilers which, when instantiated with existing schemes from the literature, improve over the state-of-the-art in terms of security, computational assumptions and functionality. Specifically, we obtain the first adaptively secure DDFE schemes for inner products in both the standard and the stronger function-hiding setting which guarantees privacy not only for messages but also for the evaluated functions. Furthermore, we present the first DDFE for inner products whose security can be proven under the LWE assumption in the standard model. Finally, we give the first construction of a DDFE for the attribute-weighted sums functionality with attribute-based access control (with some limitations). All prior constructions guarantee only selective security, rely on group-based assumptions on pairings, and cannot provide access control.
Sabyasachi Dey, Subhamoy Maitra, Santanu Sarkar, Nitin Kumar Sharma
Over the past decade and a half, cryptanalytic techniques for Salsa20 have been increasingly refined, largely following the overarching concept of Probabilistically Neutral Bits (PNBs) by Aumasson et al. (FSE 2008). In this paper, we present a novel criterion for choosing key-$\mathcal{IV}$ pairs using certain 2-round criteria and connect it with clever tweaks of existing techniques related to Probabilistically Independent $\mathcal{IV}$ bits (earlier used for ARX ciphers, but not for Salsa20) and well-studied PNBs. Through a detailed examination of the matrix after the initial rounds of Salsa20, we introduce the first-ever cryptanalysis of Salsa20 exceeding $8$ rounds. Specifically, Salsa20/$8.5$, with $256$ secret key bits, can be cryptanalyzed with a time complexity of $2^{245.84}$ and data amounting to $2^{99.47}$. Further, the sharpness of our attack can be highlighted by showing that Salsa20/$8$ can be broken with time $2^{186.01}$ and data $2^{99.73}$, a significant improvement over the best-known result of Coutinho et al. (Journal of Cryptology, 2023: time $2^{217.14}$ and data $2^{113.14}$). Here, refinements related to backward biases for PNBs are also instrumental in achieving the improvements. We also provide certain instances of how these ideas improve the cryptanalysis of the $128$-bit versions. In the process, we raise a few critical points about some existing state-of-the-art works in this direction and revisit their time and data estimates to report the correct complexities.
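For readers unfamiliar with the cipher, the ARX core these attacks target is the Salsa20 quarter-round, which updates four 32-bit words with additions, rotations, and XORs. The sketch below follows the public Salsa20 specification (including its published test vector).

```python
def rotl(x, n):
    """32-bit left rotation."""
    return ((x << n) | (x >> (32 - n))) & 0xffffffff

def quarterround(y0, y1, y2, y3):
    """The Salsa20 quarter-round: four ARX (add-rotate-xor) steps,
    as defined in the Salsa20 specification."""
    y1 ^= rotl((y0 + y3) & 0xffffffff, 7)
    y2 ^= rotl((y1 + y0) & 0xffffffff, 9)
    y3 ^= rotl((y2 + y1) & 0xffffffff, 13)
    y0 ^= rotl((y3 + y2) & 0xffffffff, 18)
    return y0, y1, y2, y3
```

A full Salsa20 round applies this map to each of the four columns (or rows) of the $4\times4$ state matrix; Salsa20/$8$ and Salsa20/$8.5$ refer to reduced-round variants of the 20-round cipher.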
David Gerault, Anna Hambitzer, Eyal Ronen, Adi Shamir
The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic functionality (e.g., to decrypt an encrypted input, to verify that this input is authorized, or to hide a secure watermark in the output). The problem is that cryptographic primitives are typically designed to run on digital computers that use Boolean gates to map sequences of bits to sequences of bits, whereas DNNs are a special type of analog computer that uses linear mappings and ReLUs to map vectors of real numbers to vectors of real numbers. This discrepancy between the discrete and continuous computational models raises the question of what is the best way to implement standard cryptographic primitives as DNNs, and whether DNN implementations of secure cryptosystems remain secure in the new setting, in which an attacker can ask the DNN to process a message whose "bits" are arbitrary real numbers.
In this paper we lay the foundations of this new theory, defining the meaning of correctness and security for implementations of cryptographic primitives as ReLU-based DNNs. We then show that the natural implementations of block ciphers as DNNs can be broken in linear time by using such nonstandard inputs. We tested our attack in the case of full round AES-128, and had $100\%$ success rate in finding $1000$ randomly chosen keys. Finally, we develop a new method for implementing any desired cryptographic functionality as a standard ReLU-based DNN in a provably secure and correct way. Our protective technique has very low overhead (a constant number of additional layers and a linear number of additional neurons), and is completely practical.
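To see why nonstandard real-valued inputs are dangerous, consider a minimal ReLU implementation of a single XOR gate (an illustrative toy, not the paper's AES pipeline): it is exact on Boolean inputs, yet on fractional inputs it leaks intermediate values that a Boolean gate would never expose.

```python
def relu(x):
    return max(0.0, x)

def xor_gate(a, b):
    """XOR as a tiny ReLU network: relu(a+b) - 2*relu(a+b-1).
    Exact on {0, 1} inputs, but well-defined on arbitrary reals."""
    return relu(a + b) - 2.0 * relu(a + b - 1.0)
```

Feeding the gate a "bit" like 0.5 makes it output 0.5, propagating non-Boolean information through the circuit, which is exactly the kind of behavior the attack on DNN block-cipher implementations exploits.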
Clémence Chevignard, Guilhem Mureau, Thomas Espitau, Alice Pellet-Mary, Heorhii Pliatsok, Alexandre Wallet
In this article we present a non-uniform reduction from rank-$2$ module-LIP over Complex Multiplication fields to a variant of the Principal Ideal Problem in some fitting quaternion algebra. This reduction is classical, deterministic, and polynomial-time in the size of the inputs. The quaternion algebra in which we need to solve the variant of the principal ideal problem depends on the parameters of the module-LIP problem, but not on the problem's instance. Our reduction requires the knowledge of some special elements of this quaternion algebra, which is why it is non-uniform.
In some particular cases, these elements can be computed in polynomial time, making the reduction uniform. This is the case for the Hawk signature scheme: we show that breaking Hawk is no harder than solving a variant of the principal ideal problem in a fixed quaternion algebra (and this reduction is uniform).
Ignacio Cascudo, Anamaria Costache, Daniele Cozzo, Dario Fiore, Antonio Guimarães, Eduardo Soria-Vazquez
We address the problem of proving the validity of computation on ciphertexts of homomorphic encryption (HE) schemes, a feature that enables outsourcing of data and computation while ensuring both data privacy and integrity.
We propose a new solution that handles computations in RingLWE-based schemes, particularly the CKKS scheme for approximate arithmetic. Our approach efficiently handles ciphertext arithmetic in the polynomial ring $R_q$ without emulation overhead and manages ciphertext maintenance operations, such as modulus switching, key switching, and rescaling, at small cost.
Our main result is a succinct argument that efficiently handles arithmetic computations and range checks over the ring $R_q$. To build this argument system, we construct new polynomial interactive oracle proofs (PIOPs) and multilinear polynomial commitments supporting polynomials over $R_q$, unlike prior work, which focused on finite fields. We validate the concrete complexity of our approach through implementation and experimentation. Compared to the current state-of-the-art on verifiable HE for RNS schemes, we present similar performance for small circuits while being able to efficiently scale to larger ones, which was a major challenge for previous constructions as it requires verifying procedures such as relinearization.
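The basic operation being proven here, arithmetic in $R_q = \mathbb{Z}_q[X]/(X^n+1)$, is negacyclic polynomial multiplication. A toy sketch with illustrative parameters (real RingLWE schemes use large $n$ and RNS moduli; this is only to make the ring structure concrete):

```python
q = 97   # toy ciphertext modulus (illustrative; CKKS uses large RNS moduli)
n = 4    # toy ring degree, R_q = Z_q[X]/(X^n + 1)

def ring_mul(a, b):
    """Multiply a, b in R_q = Z_q[X]/(X^n + 1): negacyclic convolution.
    Since X^n == -1 in the quotient ring, terms of degree >= n wrap
    around with a sign flip."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:
                res[k - n] = (res[k - n] - ai * bj) % q
    return res
```

For instance, $X \cdot X^3 = X^4 = -1$ in this ring, so multiplying the coefficient vectors of $X$ and $X^3$ yields the constant $q - 1$.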
Mohammed Barhoush, Ryo Nishimaki, Takashi Yamakawa
We investigate two natural relaxations of quantum cryptographic assumptions. First, we examine primitives such as pseudorandom generators ($\text{PRG}$s) and pseudorandom states ($\text{PRS}$s), extended with quantum input sampling, which we term $\text{PRG}^{qs}$ and $\text{PRS}^{qs}$. In these primitives, the input is sampled via a quantum algorithm rather than uniformly at random. The second relaxation, $\bot$-pseudodeterminism, allows the generator to output $\bot$ on an inverse-polynomial fraction of inputs.
We demonstrate an equivalence between (bounded-query) logarithmic-sized $\text{PRS}^{qs}$, logarithmic-sized $\text{PRFS}^{qs}$, and $\text{PRG}^{qs}$. Notably, such an equivalence remains unknown for the uniform key sampling versions of these primitives. Furthermore, we establish that $\text{PRG}^{qs}$ can be constructed from $\bot$-pseudodeterministic $\text{PRG}$s ($\bot\text{-PRG}$s).
To further justify our exploration, we present two separation results. First, we examine the relationship between $\bot$-pseudodeterministic notions and their deterministic counterparts. We show that there does not exist a black-box construction of a one-way state generator $(\text{OWSG})$ from a $\bot\text{-PRG}$, indicating that $\bot$-pseudodeterministic primitives may be inherently weaker than their deterministic counterparts. Second, we explore the distinction between quantum and uniform input sampling. We prove that there does not exist a black-box construction of a $\bot$-pseudodeterministic $\text{OWSG}$ from a $\text{PRF}^{qs}$, suggesting that primitives relying on quantum input sampling may be weaker than those using traditional uniform sampling. Given the broad cryptographic applicability of $\text{PRF}^{qs}$s and $\bot\text{-PRG}$s, these separation results yield numerous new insights into the hierarchy of primitives within MicroCrypt.
Ali Dogan, Sermin Kocaman
A Decentralized Autonomous Organization (DAO) operates without a central entity, being owned and governed collectively by its members. In such an organization, routine decisions are carried out automatically through smart contracts, while members vote on unforeseen issues. Scalability in decision-making through voting on proposals is essential to accommodate a growing number of members without sacrificing security. This paper addresses this challenge by introducing S2DV, a scalable and secure DAO voting system that ensures security through Groth16 zk-SNARKs and the exponential ElGamal encryption algorithm, while achieving scalability by verifiably delegating heavy computations to untrusted entities. Off-chain computation over exponential ElGamal homomorphic ciphertexts reduces the computational cost on the blockchain, while Groth16 proofs guarantee the correctness of these off-chain calculations without revealing any further details. Specifically, the Groth16 proof guarantees that (i) the encrypted votes accurately reflect the voter's voting power, ensuring no unauthorized weight manipulation; (ii) only valid non-negative vote values are encrypted, preventing unintended or malicious vote tampering; and (iii) the homomorphic summation is performed correctly. The implementation shows that the proofs are verified remarkably fast, making the S2DV protocol highly suitable for scalable DAO voting, while preserving the security of the election.
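A minimal sketch of the additively homomorphic tallying that exponential ElGamal enables (toy group parameters and names are assumptions for illustration; this is not the S2DV implementation): encrypting $m$ as $(g^r, g^m h^r)$ makes component-wise ciphertext multiplication add the plaintext votes, and the small final tally is recovered by brute-forcing a discrete log.

```python
import random

random.seed(42)
p, g = 467, 2                       # toy prime and generator, illustration only
sk = random.randrange(2, p - 1)     # tallier's secret key
pk = pow(g, sk, p)                  # public key h = g^sk

def enc(m):
    """Exponential ElGamal: Enc(m) = (g^r, g^m * pk^r) mod p."""
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p)

def add(c, d):
    """Component-wise product of ciphertexts encrypts the sum of plaintexts."""
    return ((c[0] * d[0]) % p, (c[1] * d[1]) % p)

def dec(c, max_m=100):
    """Recover g^m, then brute-force the small discrete log (fine for tallies)."""
    gm = (c[1] * pow(c[0], p - 1 - sk, p)) % p
    for m in range(max_m + 1):
        if pow(g, m, p) == gm:
            return m

votes = [1, 0, 1, 1]                # four yes/no ballots
tally_ct = enc(votes[0])
for v in votes[1:]:
    tally_ct = add(tally_ct, enc(v))
tally = dec(tally_ct)               # homomorphic sum of the votes
```

The need to brute-force the exponent is why exponential ElGamal suits voting (tallies are small) and why the zk-SNARK must separately enforce that each encrypted vote is a valid small value.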
Yifan Song, Xiaxi Ye
In this work, we consider the communication complexity of MPC protocols in the honest majority setting achieving malicious security in both the information-theoretic and the computational setting. On the one hand, we study the possibility of efficiently basing honest majority MPC protocols on the oblivious linear evaluation (OLE)-hybrid model with information-theoretic security. More precisely, we instantiate the preprocessing phase of the recent work Sharing Transformation (Goyal, Polychroniadou, and Song, CRYPTO 2022) assuming random OLE correlations. Notably, we are able to prepare packed Beaver triples with malicious security at an amortized communication cost of $O(n)$ field elements plus $O(n)$ OLE correlations per packed Beaver triple, which is the best known result. To efficiently prepare random OLE correlations, we resort to IKNP-style OT extension protocols (Ishai et al., CRYPTO 2003) in the random oracle model.
On the other hand, we derive a communication lower bound for preparing OLE correlations in the information-theoretic setting based on negative results due to Damgård, Larsen, and Nielsen (CRYPTO 2019).
Combining our positive result with the work of Goyal, Polychroniadou, and Song (CRYPTO 2022), we derive an MPC protocol with amortized communication of $O(\ell+\kappa)$ elements per gate in the random oracle model achieving malicious security, where $\ell$ denotes the length of a field element and $\kappa$ is the security parameter.
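For intuition, here is a plain (semi-honest, unpacked) Beaver-triple multiplication over 2-out-of-2 additive shares, the building block whose packed, maliciously secure preparation this work optimizes. All names and parameters are illustrative.

```python
import random

random.seed(7)
p = 2**61 - 1   # a Mersenne prime field

def share(v):
    """2-out-of-2 additive sharing over F_p."""
    r = random.randrange(p)
    return [r, (v - r) % p]

def reconstruct(sh):
    return sum(sh) % p

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply secret-shared x and y using a preprocessed triple (a, b, c = a*b).
    The parties open only the masked values d = x - a and e = y - b."""
    d = reconstruct([(x_sh[i] - a_sh[i]) % p for i in range(2)])
    e = reconstruct([(y_sh[i] - b_sh[i]) % p for i in range(2)])
    # x*y = d*e + d*b + e*a + c, computed locally on shares
    # (the public term d*e is added by party 0 only).
    return [((d * e if i == 0 else 0) + d * b_sh[i] + e * a_sh[i] + c_sh[i]) % p
            for i in range(2)]

x, y = 123, 456
a, b = random.randrange(p), random.randrange(p)
c = (a * b) % p
z_sh = beaver_mul(share(x), share(y), share(a), share(b), share(c))
```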
19 February 2025
Universitat Politècnica de Catalunya (UPC)- BarcelonaTECH
Several faculty positions in the Cybersecurity domain are available at Universitat Politecnica de Catalunya - BarcelonaTech (UPC), School of Engineering in Manresa. UPC is a leading research university in Spain, with more than 5000 students in Information and Communication Technology-related fields. It is the top Spanish university in terms of participation in EU-funded research projects.
Apart from conducting research within their area, the successful candidates are expected to teach in the recent MSc program on Cybersecurity, AI, and IoT, funded through the European Commission’s Digital Europe programme. The development of the MSc program will be part of a European project, in which the candidates will participate actively.
Skills/Qualifications:
- PhD degree in Computer Science, Computer Engineering, or a related field
- Written and spoken proficiency in English (Spanish or Catalan is a plus)
- Proven scientific publication record
- Experience in academic teaching (classes and lab courses) and in supervising or co-supervising academic theses

What we offer:
- A friendly working environment
- Hybrid work modality with up to 40% home office option
- Wide range of internal and external training opportunities and various career options
- Nice climate
Closing date for applications:
Contact: Ilker Demirkol ([email protected])
Jules Baudrin, Sonia Belaïd, Nicolas Bon, Christina Boura, Anne Canteaut, Gaëtan Leurent, Pascal Paillier, Léo Perrin, Matthieu Rivain, Yann Rotella, Samuel Tap
Fully Homomorphic Encryption (FHE) allows computations on encrypted data without requiring decryption, ensuring data privacy during processing. However, FHE introduces a significant expansion of ciphertext sizes compared to plaintexts, which results in higher communication costs. A practical solution to mitigate this issue is transciphering, where only the master key is homomorphically encrypted, while the actual data is encrypted using a symmetric cipher, usually a stream cipher. The server then homomorphically evaluates the stream cipher to convert the encrypted data into homomorphically encrypted form.
We introduce Transistor, a stream cipher specifically designed for efficient homomorphic evaluation within the TFHE scheme, a widely used FHE framework known for its fast bootstrapping and ability to handle low-precision data. Transistor operates over $\mathbb{F}_{17}$, which is chosen to optimize TFHE performance. Its components are carefully engineered both to control noise growth and to provide strong security guarantees. First, a simple TFHE-friendly implementation technique for LFSRs allows us to use such components to cheaply increase the state size. At the same time, a small Finite State Machine is the only part of the state updated non-linearly, each non-linear operation corresponding in TFHE to a rather expensive Programmable Bootstrapping. This update is done using an AES-round-like transformation. In contrast to other stream ciphers like SNOW or LEX, our construction comes with information-theoretic security arguments proving that an attacker cannot obtain any information about the secret key from three or fewer consecutive keystream outputs. These information-theoretic arguments are then combined with a thorough analysis of potential correlations to bound the minimal keystream length required for recovering the secret key.
Our implementation of Transistor significantly outperforms the state of the art of TFHE transciphering, achieving a throughput of over 60 bits/s on a standard CPU, all while avoiding the need for an expensive initialization process.
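The LFSR component described in the abstract is cheap under TFHE precisely because its update is purely linear mod 17, requiring no Programmable Bootstrapping. A plain software sketch of such an LFSR over $\mathbb{F}_{17}$ (the feedback taps, state length, and initial state here are hypothetical, not Transistor's actual parameters):

```python
p = 17                       # Transistor operates over F_17
coeffs = [3, 5, 1, 2]        # hypothetical feedback taps (NOT Transistor's)
state  = [1, 0, 4, 9]        # hypothetical initial state

def lfsr_step(state):
    """One step of a Fibonacci LFSR over F_17: output the leading cell and
    feed back a linear combination of the state. Every operation is linear
    mod 17, hence cheap under TFHE (no bootstrapping needed)."""
    out = state[0]
    fb = sum(c * s for c, s in zip(coeffs, state)) % p
    return out, state[1:] + [fb]

def keystream(st, n):
    """Generate n keystream symbols in F_17."""
    outs = []
    for _ in range(n):
        o, st = lfsr_step(st)
        outs.append(o)
    return outs
```

In Transistor itself, the non-linear filtering of the state is confined to a small FSM updated with an AES-round-like map, since each non-linear step costs a Programmable Bootstrapping in TFHE.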
Anasuya Acharya, Karen Azari, Mirza Ahad Baig, Dennis Hofheinz, Chethan Kamath
Garbling is a fundamental cryptographic primitive, with numerous theoretical and practical applications. Since the first construction by Yao (FOCS’82, ’86), a line of work has concerned itself with reducing the communication and computational complexity of that construction. One of the most efficient garbling schemes presently is the ‘Half Gates’ scheme by Zahur, Rosulek, and Evans (Eurocrypt’15). Despite its widespread adoption, the provable security of this scheme has been based on assumptions whose only instantiations are in idealized models. For example, in their original paper, Zahur, Rosulek, and Evans showed that hash functions satisfying a notion called circular correlation robustness (CCR) suffice for this task, and then proved that CCR secure hash functions can be instantiated in the random permutation model.

In this work, we show how to securely instantiate the Half Gates scheme in the standard model. To this end, we first show how this scheme can be securely instantiated given a (family of) weak CCR hash function, a notion that we introduce. Furthermore, we show how a weak CCR hash function can be used to securely instantiate other efficient garbling schemes, namely the ones by Rosulek and Roy (Crypto’21) and Heath (Eurocrypt’24). Thus we believe this notion to be of independent interest.

Finally, we construct such weak CCR hash functions using indistinguishability obfuscation and one-way functions. The security proof of this construction constitutes our main technical contribution. While our construction is not practical, it serves as a proof of concept supporting the soundness of these garbling schemes, which we regard to be particularly important given the recent initiative by NIST to standardize garbling, and the optimizations in Half Gates being potentially adopted.
Bill Allombert, Alice Pellet-Mary, Wessel van Woerden
The rank-$2$ module-LIP problem was introduced in cryptography by (Ducas, Postlethwaite, Pulles, van Woerden, Asiacrypt 2022), to construct the highly performant HAWK scheme. A first cryptanalytic work by (Mureau, Pellet--Mary, Pliatsok, Wallet, Eurocrypt 2024) showed a heuristic polynomial time attack against the rank-$2$ module-LIP problem over totally real number fields. While mathematically interesting, this attack focuses on number fields that are not relevant for cryptography. The main families of fields used in cryptography are the highly predominant cyclotomic fields (used for instance in the HAWK scheme), as well as the NTRU Prime fields, used for instance in the eponymous NTRU Prime scheme (Bernstein, Chuengsatiansup, Lange, van Vredendaal, SAC 2017).
In this work, we generalize the attack of Mureau et al. against rank-$2$ module-LIP to the family of all number fields with at least one real embedding, which contains the NTRU Prime fields. We present three variants of our attack: first, a heuristic one that runs in quantum polynomial time; second, under the extra assumption that the defining polynomial of $K$ has a $2$-transitive Galois group (which is the case for the NTRU Prime fields), a provable attack that runs in quantum polynomial time; and third, with the same $2$-transitivity assumption, a heuristic attack that runs in classical polynomial time. For the latter we use a generalization of the Gentry--Szydlo algorithm to any number field, which might be of independent interest.
Dan Boneh, Benedikt Bünz, Kartik Nayak, Lior Rotem, Victor Shoup
We initiate the study of high-threshold public-key decryption, along with an enhanced security feature called context-dependent decryption.
Our study includes definitions, constructions, security proofs, and applications.
The notion of high-threshold decryption has received almost no attention in the literature. The enhanced security feature of context-dependent decryption is entirely new, and plays an important role in many natural applications of threshold decryption.
Sonia Belaïd, Matthieu Rivain, Mélissa Rossi
The random probing model formalizes a leakage scenario where each wire in a circuit leaks with probability $p$. This model holds practical relevance due to its reduction to the noisy leakage model, which is widely regarded as the appropriate formalization for power and electromagnetic side-channel attacks.
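The leakage experiment of the random probing model is simple to state. As a minimal illustrative sketch (the function name, parameters, and wire encoding below are our own, not taken from the paper), each wire value leaks to the adversary independently with probability $p$:

```python
import random

def random_probing_leakage(wires, p, rng=None):
    """Random probing model: each wire value leaks independently
    with probability p; the adversary observes only the leaked subset,
    returned here as a mapping from wire index to value."""
    rng = rng or random.Random()
    return {i: v for i, v in enumerate(wires) if rng.random() < p}
```

At $p = 0$ nothing leaks and at $p = 1$ every wire leaks; security arguments in this model bound the adversary's advantage as a function of $p$ and the circuit size.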
In this paper, we present new techniques for designing efficient masking schemes that achieve tighter random probing security with lower complexity. First, we introduce the notion of \emph{cardinal random probing composability} (Cardinal-RPC), offering a new trade-off between complexity and security for composing masking gadgets. Next, we propose a novel refresh technique based on a simple iterative process: randomly selecting and updating two shares with fresh randomness. While not perfectly secure in the standard probing model, this method achieves arbitrary cardinal-RPC security, making it a versatile tool for constructing random-probing secure circuits. Using this refresh, we develop additional basic gadgets (e.g., linear multiplication, addition, and copy) that satisfy the cardinal-RPC notion. Despite the increased complexity, the gains in security significantly outweigh the overhead, with the number of iterations offering useful flexibility.
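The iterative refresh just described can be sketched in a few lines. The rendering below is our own illustration for an additive arithmetic masking; the modulus, share representation, and iteration count are assumptions, not the paper's gadget:

```python
import secrets

Q = 2**31 - 1  # illustrative prime modulus, not the paper's choice

def refresh(shares, iterations):
    """Each iteration randomly selects two distinct shares and updates
    both with the same fresh randomness r (added to one, subtracted
    from the other), so the masked secret -- the sum of the shares
    mod Q -- is unchanged while the sharing is re-randomized."""
    shares = list(shares)
    n = len(shares)
    for _ in range(iterations):
        i = secrets.randbelow(n)
        j = (i + 1 + secrets.randbelow(n - 1)) % n  # distinct from i
        r = secrets.randbelow(Q)
        shares[i] = (shares[i] + r) % Q
        shares[j] = (shares[j] - r) % Q
    return shares
```

As the abstract notes, a single such step is not secure in the standard probing model; the security claim concerns the iterated process, with the iteration count as the tunable parameter.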
To showcase our techniques, we apply them to lattice-based signatures. Specifically, we introduce a new random-probing composable gadget for sampling small noise, a key component in various post-quantum algorithms. To assess security in this context, we generalize the random probing security model to address auxiliary inputs and public outputs. We apply our findings to Raccoon, a masking-friendly signature scheme originally designed for standard probing security. We prove the secure composition of our new gadgets for key generation and signature computation, and show that our masking scheme achieves a superior security-performance tradeoff compared to previous approaches based on random probing expansion. To our knowledge, this is the first fully secure instantiation of a post-quantum algorithm in the random probing model.
Sara Montanari, Riccardo Longo, Alessio Meneghetti
The secure management of private keys is a fundamental challenge, particularly for the general public, as losing these keys can result in irreversible asset loss. Traditional custodial approaches pose security risks, while decentralized secret sharing schemes offer a more resilient alternative by distributing trust among multiple parties. In this work, we extend an existing decentralized, verifiable, and extensible cryptographic key recovery scheme based on Shamir's secret sharing. We introduce a refresh phase that ensures proactive security, preventing long-term exposure of secret shares. Our approach explores three distinct methods for refreshing shares, analyzing and comparing their security guarantees and computational complexity. Additionally, we extend the protocol to support more complex access structures, with a particular focus on threshold access trees, enabling fine-grained control over key reconstruction.
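The proactive idea behind such a refresh phase can be sketched as follows (our own illustration, not the paper's protocol; the field size and API are assumed): each refresh adds a fresh random polynomial with zero constant term to the sharing, so the secret is preserved while previously exposed shares become useless.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime modulus (illustrative choice)

def eval_poly(coeffs, x):
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def shamir_share(secret, t, n):
    # Degree-(t-1) polynomial with the secret as constant term,
    # evaluated at points 1..n.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, eval_poly(coeffs, x)) for x in range(1, n + 1)]

def refresh(shares, t):
    # Proactive refresh: shift every share by a fresh degree-(t-1)
    # polynomial with zero constant term; the secret is unchanged.
    blind = [0] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, (y + eval_poly(blind, x)) % P) for x, y in shares]

def reconstruct(shares):
    # Lagrange interpolation at 0 over exactly t shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

An adversary who compromises fewer than $t$ parties per refresh epoch learns nothing, since shares from different epochs belong to different polynomials.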
18 February 2025
Julius Hermelink, Kai-Chun Ning, Richard Petri
NIST has standardized ML-KEM and ML-DSA as replacements for pre-quantum key exchanges and digital signatures. Both schemes have already seen analysis with respect to side-channels, and first fully masked implementations of ML-DSA have been published. Previous attacks have focused on unprotected implementations or assumed only hiding countermeasures to be in place. Thus, in contrast to ML-KEM, the threat of side-channel attacks for protected implementations of ML-DSA is mostly unclear.
In this work, we analyze the side-channel vulnerability of masked ML-DSA implementations. We first systematically assess the vulnerability of several potential points of attack in different leakage models using information theory. Then, we explain how an adversary could launch first-, second-, and higher-order attacks using a recently presented framework for side-channel information in lattice-based schemes. In this context, we propose a filtering technique that allows the framework to solve for the secret key from a large number of hints; this had previously been prevented by numerical instabilities. We simulate the presented attacks and discuss the relation to the information-theoretic analysis.
Finally, we carry out relevant attacks on a physical device, discuss recent masked implementations, and instantiate a countermeasure against the most threatening attacks. The countermeasure mitigates the attacks with the highest noise-tolerance while having very little overhead. The results on a physical device validate our simulations.