Linear Secret-shared Shuffle with Malicious Security(2025/2137)
Authors
Samuel Dittmer, Stealth Software Technologies, Inc.
Rohit Nema, Stanford University
Rafail Ostrovsky, University of California, Los Angeles
Abstract
Securely shuffling a secret-shared list is a vital sub-protocol in numerous applications, including secure sorting, secure list merging, secure graph processing, oblivious RAM, and anonymous broadcast. We demonstrate how to convert the folklore constant-round protocol for secure shuffling, which employs a delegated Fisher-Yates shuffle using rerandomizable encryption, into a maliciously secure constant-round protocol. This gives the first protocol with linear end-to-end running time for a two-party secret-shared shuffle with malicious security.
We prove the security of our protocol under the ``linear-only'' assumption on the homomorphic encryption system. We also demonstrate that a weaker assumption, weak predictability, suffices for security.
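For context, the semi-honest core of the folklore protocol can be sketched as follows: one party encrypts the list under a rerandomizable scheme, and the other party applies a Fisher-Yates permutation and rerandomizes every ciphertext so that positions cannot be linked. The sketch below uses a toy ElGamal group and illustrative parameters; it is not the paper's maliciously secure protocol.

import random

# Toy ElGamal over the order-q subgroup of Z_p^*, with p = 2q + 1 a safe prime.
p = 2039                     # = 2*1019 + 1; q = 1019 is prime
q = (p - 1) // 2
g = 4                        # a quadratic residue, hence a generator of the order-q subgroup

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)

def enc(pk, m):              # m must be an element of the subgroup
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def dec(sk, ct):
    c1, c2 = ct
    return c2 * pow(pow(c1, sk, p), p - 2, p) % p

def rerandomize(pk, ct):     # multiply by a fresh encryption of 1
    c1, c2 = ct
    r = random.randrange(1, q)
    return (c1 * pow(g, r, p) % p, c2 * pow(pk, r, p) % p)

def delegated_shuffle(pk, cts):
    cts = list(cts)
    for i in range(len(cts) - 1, 0, -1):        # Fisher-Yates permutation
        j = random.randrange(i + 1)
        cts[i], cts[j] = cts[j], cts[i]
    return [rerandomize(pk, ct) for ct in cts]  # hide the permutation

sk, pk = keygen()
msgs = [pow(g, x, p) for x in (11, 22, 33, 44)]  # encode messages as group elements
shuffled = delegated_shuffle(pk, [enc(pk, m) for m in msgs])
assert sorted(dec(sk, ct) for ct in shuffled) == sorted(msgs)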
Publication status: Preprint.
The Latency Cost Of Censorship Resistance(2025/2136)
Authors
Ittai Abraham, Intel Labs
Yuval Efron, Columbia University
Ling Ren, University of Illinois at Urbana–Champaign
Abstract
On the road to eliminating censorship from modern blockchain protocols, recent work in consensus has explored protocol design choices that delegate the duty of block assembly away from a single consensus leader and instead to multiple parties, referred to as includers. As opposed to the traditional leader-based approach, which guarantees transaction inclusion in a block produced by the next correct leader, the multiple includer approach allows blockchain protocols to provide a strong censorship-resistance property for users: A timely submitted transaction is guaranteed to be included in the next confirmed block, regardless of the leader's behavior. Such a guarantee, however, comes at the cost of 2 additional rounds of latency to block confirmation, compared to the leader-based approach. Is this cost necessary?
We introduce the Censorship Resistant Byzantine Broadcast (CRBB) problem, a one-shot variant that distills the core functionality underlying the multiple-includer design paradigm. We then provide a full characterization, both in synchrony and partial synchrony, of the achievable latency of CRBB in executions with a correct leader, which is the most relevant case to practice. Our main result is an inherent latency cost of two additional rounds compared to the classic Byzantine Broadcast (BB) problem. For example, synchronous protocols for CRBB require 4 rounds whenever BB requires 2 rounds. Similarly, up to a small constant in the resilience, partial synchrony protocols for CRBB require 5 rounds whenever BB requires 3 rounds.
Publication status: Preprint.
Robust Elections and More: Fast MPC in the Preprocessing Model(2025/2135)
Authors
Charanjit S. Jutla, IBM Research - Thomas J. Watson Research Center
Nathan Manohar, IBM Research - Thomas J. Watson Research Center
Arnab Roy, Mysten Labs
Abstract
In this paper, we present an MPC protocol in the preprocessing model with essentially the same concrete online communication and rounds as the state-of-the-art MPC protocols such as online-BGW (with precomputed Beaver tuples) for $t < n/3$ malicious corruptions. However, our protocol additionally guarantees robustness and correctness against up to $t < n/2$ malicious corruptions while the privacy threshold remains at $n/3$. This is particularly useful in settings (e.g. commodity/stock market auctions, national elections, IACR elections) where it is paramount that the correct outcome is certified, while maintaining the best possible online speed. In addition, this honest-majority correctness allows us to use optimistic Berlekamp-Welch decoding in contrast to BGW. Moreover, just like online-BGW, our protocol is responsive until a final attestation phase.
We also give a complementary verifiable input-sharing scheme for the multi-client distributed-server setting which satisfies both robustness and correctness against up to $t < n/2$ malicious servers. This is accomplished by having the servers first run a preprocessing phase that does not involve the clients. The novelty of this input-sharing scheme is that a client only interacts for one round, and hence need not be online, which, again, is highly desirable in applications such as elections/auctions.
We prove our results in the universally-composable model with statistical security against static corruptions. Our protocol is achieved by combining global authenticators of SPDZ with an augmented Reed-Solomon code in a novel manner. This augmented code enables honest-majority decoding of degree $n/2$ Reed-Solomon codes. Our particular augmentation (often referred to as robust sharing) has the additional property that the preprocessing phase can generate this augmented sharing with a factor $n$ speedup over prior information-theoretic robust sharing schemes.
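For readers unfamiliar with the precomputed Beaver tuples mentioned above, the standard online multiplication step (generic notation, not specific to this paper's protocol) is

\[
  [xy] \;=\; d\,e \;+\; d\,[b] \;+\; e\,[a] \;+\; [c],
  \qquad d = \mathrm{Open}([x]-[a]),\quad e = \mathrm{Open}([y]-[b]),
\]

where $([a],[b],[c])$ with $c = ab$ is a random triple produced in preprocessing; only the two openings require online communication, which is why the online phase of such protocols is so cheap.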
Publication status: Preprint.
Non-Interactive Threshold Mercurial Signatures with Applications to Threshold DAC(2025/2134)
Authors
Scott Griffy, Brown University
Nicholas Jankovic, Brown University
Anna Lysyanskaya, Brown University
Arup Mondal, Ashoka University
Abstract
In a mercurial signature, a signer signs a representative $m$ of an equivalence class of messages on behalf of a representative $\mathsf{pk}$ of an equivalence class of public keys, receiving the signature $\sigma$. One can then transform $\sigma$ into a signature $\sigma'$ on an equivalent (to $m$) message $m'$ under an equivalent (to $\mathsf{pk}$) public key $\mathsf{pk}'$. Mercurial signatures are helpful in constructing delegatable anonymous credentials: their privacy properties enable straightforward randomization of a credential chain, hiding the identity of each signer while preserving the authenticity of the overall credential.
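In the pairing-based instantiations in this line of work (sketched here for orientation, following the Crites-Lysyanskaya style of construction rather than this paper's specific scheme; $\ell$, $\mathbb{G}_1$, $\mathbb{G}_2$, $p$ are the usual bilinear-group parameters), the equivalence classes are projective:

\[
  M \sim M' \iff \exists\, \mu \in \mathbb{Z}_p^*:\; M' = M^{\mu},
  \qquad
  \mathsf{pk} \sim \mathsf{pk}' \iff \exists\, \rho \in \mathbb{Z}_p^*:\; \mathsf{pk}' = \mathsf{pk}^{\rho},
\]

for message vectors $M \in (\mathbb{G}_1^*)^{\ell}$ and key vectors $\mathsf{pk} \in (\mathbb{G}_2^*)^{\ell}$. Given $\sigma$ on $(M, \mathsf{pk})$ and randomizers $(\mu, \rho)$, anyone can compute a signature $\sigma'$ on $(M^{\mu}, \mathsf{pk}^{\rho})$ that is unlinkable to $\sigma$, which is what makes randomizing an entire credential chain straightforward.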
Unfortunately, without trusted setup, known constructions of mercurial signatures satisfy only a weak form of this privacy property. Specifically, an adversary who is responsible for a link in a delegation chain—and thus knows its corresponding secret key—will be able to recognize this link even after the chain has been randomized.
To address this issue, Abe et al. (Asiacrypt 2024) proposed (interactive) threshold mercurial signatures (TMS), which remove the reliance on a single trusted signer by distributing the signing capability among multiple parties, none of whom knows the signing key. However, this contribution was far from practical, as it required the signers to interact with each other during the signing process.
In this work, we define and realize non-interactive TMS, where each participant non-interactively computes its contribution to the threshold mercurial signature. Our construction also substantially reduces the overall communication complexity. It uses the mercurial signature scheme of Mir et al. (CCS 2023) as a starting point. Further, we introduce threshold delegatable anonymous credentials (TDAC) and use a non-interactive TMS to construct them.
Publication status: Preprint.
Byzantine Broadcast with Unknown Participants(2025/2133)
Authors
Wonseok Choi, DGIST
Ran Cohen, Reichman University
Juan Garay, Texas A&M University
Nikos Skoumios, University of Washington
Vassilis Zikas, Georgia Institute of Technology
Abstract
A sender wishes to consistently broadcast a message on the dark web, so that whoever is around and active will agree on it even when the sender is malicious. No assumptions on the number of honest parties, or blockchain-style ``tricks''---like balanced resource-allocation (e.g., hashing power or stake ownership)---can be made.
The above is an instance of Byzantine broadcast (BB) in the unknown-participants setting (``UP Broadcast'' for short). Despite four decades of extensive research on dishonest-majority BB, all existing approaches (e.g., the well-known Dolev-Strong protocol) fail to solve this problem, as they crucially rely on knowing the number of protocol participants---or they make blockchain-style assumptions on available resources. The challenge, which might appear as an inherent limitation, is that without any such assumption malicious parties can join the protocol at any point during its execution, making it arduous for other parties to terminate without violating consistency. So one might wonder: Is this even possible?
In this work, we provide the first definitions of UP Broadcast that incorporate both static and dynamic participation and corruption of arbitrarily many parties. Interestingly, even formally defining the problem turns out to be non-trivial as one needs to deviate from the model used in classical BB approaches. We then provide the strongest possible (and in our opinion, unexpected) answer to the above question: Yes, it is! We provide a polynomial-time deterministic UP Broadcast protocol. In the process we also solve UP Interactive Consistency, which corresponds to the multi-sender version of the problem. Our constructions are in the standard, synchronous model of protocol execution, and they offer consistency and validity guarantees to every party who is present throughout the protocol execution.
We next turn to the question of round complexity and prove that our protocols are optimal against adversaries who can corrupt arbitrarily many parties; this optimality applies even to randomized protocols. Finally, we ask, what if parties join in the middle of the protocol execution? We provide a negative result for unrestricted dynamic participation; on the positive side, we devise definitions that offer best-possible guarantees (also to such ``late'' parties), and present corresponding constructions that remain round-optimal.
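For orientation, the classical BB requirements that the UP definitions must adapt are, in their textbook form (the paper's formal definitions restrict these guarantees to parties present throughout the execution):

\[
\begin{array}{ll}
\textit{Termination:} & \text{every honest party eventually produces an output;}\\
\textit{Agreement:} & \text{all honest parties output the same value;}\\
\textit{Validity:} & \text{if the designated sender } s \text{ is honest with input } v, \text{ every honest party outputs } v.
\end{array}
\]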
Publication status: Preprint.
Tjitske Ollie Koster, Delft University of Technology
Francesca Falzon, ETH Zurich
Evangelia Anna Markatou, Delft University of Technology
Abstract
Recent attacks on private set intersection (PSI) and PSI-like protocols have demonstrated that input privacy can be compromised when parties maliciously choose their inputs, even in protocols proven secure against malicious adversaries. To counter such attacks, Authorized PSI (APSI) introduces a judge who authorizes the elements of the parties before the intersection is computed.
Falzon and Markatou (PETS 2025) proposed Partial-APSI, a privacy-preserving variant of APSI that prevents revealing the entire set to a judge. Their Partial-APSI protocol incurs significant bandwidth overhead due to the use of bilinear pairings and because the judge must sign each element in the input set. In this work, we present a bandwidth-efficient Partial-APSI protocol that outperforms Falzon and Markatou, both asymptotically and empirically. For example, for sets of size $2^{20}$, we require around $21\times$ less bandwidth and are about $6\times$ faster over a LAN.
In addition to our protocol, we model the real-world behavior of rational parties through a game-theoretic analysis.
We introduce payout mechanisms for detected cheating and establish lower bounds on their values, ensuring that the best strategy for rational parties is to provide honest input.
Publication status: Preprint.
Persistent BitTorrent Trackers(2025/2131)
Authors
Francois Xavier Wicht, University of Bern
Zhengwei Tong, Duke University
Shunfan Zhou, Phala Network
Hang Yin, Phala Network
Aviv Yaish, Yale University, IC3
Abstract
Private BitTorrent trackers enforce upload-to-download ratios to prevent free-riding, but suffer from three critical weaknesses: reputation cannot move between trackers, centralized servers create single points of failure, and upload statistics are self-reported and unverifiable. When a tracker shuts down (whether by operator choice, technical failure, or legal action), users lose their contribution history and cannot prove their standing to new communities. We address these problems by storing reputation in smart contracts and replacing self-reports with cryptographic attestations. Receiving peers sign receipts for transferred pieces, which the tracker aggregates and verifies before updating on-chain reputation. Trackers run in Trusted Execution Environments (TEEs) to guarantee correct aggregation and prevent manipulation of state. If a tracker is unavailable, peers use an authenticated Distributed Hash Table (DHT) for discovery: the on-chain reputation acts as a Public Key Infrastructure (PKI), so peers can verify each other and maintain access control without the tracker. This design persists reputation across tracker failures and makes it portable to new instances through single-hop migration in factory-deployed contracts. We formalize the security requirements, prove correctness under standard cryptographic assumptions, and evaluate a prototype on Intel TDX. Measurements show that transfer receipts add less than 6\% overhead with typical piece sizes, and signature aggregation speeds up verification by $2.5\times$.
Publication status: Preprint.
Weightwise (almost) perfectly balanced functions: $t$-concatenation and the general Maiorana-McFarland class(2025/2130)
Authors
Leyla Işık, University of Primorska
René Rodríguez-Aldama, University of Primorska
Ajla Šehović, University of Primorska
Abstract
The study of cryptographic criteria for Boolean functions with restricted domains has been an important topic over the last 20 years. Renewed interest was sparked by the work of Carlet, Méaux and Rotella in 2017, where the authors studied cryptographic properties of restricted-domain functions and introduced the concept of weightwise perfectly balanced functions as part of the analysis of the FLIP stream cipher. Weightwise (almost) perfectly balanced functions are defined as Boolean functions that are (almost) balanced on each of the sets of vectors of the same Hamming weight. Several approaches have been considered to build new families of such functions. In this article, we present some new constructions of weightwise (almost) perfectly balanced functions via two approaches: the first class is constructed using the $t$-concatenation of Boolean functions, whereas the second one draws certain functions from the so-called general Maiorana-McFarland class. A generic analysis of these two classes is given, as well as explicit examples in both classes. Namely, we provide instances of functions in both classes attaining high overall nonlinearities, as well as slice nonlinearities. Notably, we present examples in 16 variables that attain some of the best overall nonlinearities, and more importantly, the highest slice nonlinearities among all of the constructions presented in the literature.
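As a point of reference (the standard definition from the Carlet-Méaux-Rotella line of work, not notation specific to this paper): writing $E_{k,n} = \{x \in \mathbb{F}_2^n : \mathrm{wt}(x) = k\}$ for the slice of Hamming weight $k$, a function $f \colon \mathbb{F}_2^n \to \mathbb{F}_2$ is weightwise perfectly balanced if

\[
  \bigl|\{\, x \in E_{k,n} : f(x) = 1 \,\}\bigr| \;=\; \tfrac{1}{2}\binom{n}{k}
  \quad \text{for every } 1 \le k \le n-1,
\]

which requires $\binom{n}{k}$ to be even on all these slices and hence $n$ to be a power of two; the ``almost'' perfectly balanced variant relaxes the count to be as close to $\binom{n}{k}/2$ as the parity of $\binom{n}{k}$ allows.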
Publication status: Preprint.
Binding Security of Explicitly-Rejecting KEMs via Plaintext Confirmation and Robust PKEs(2025/2129)
Authors
Juliane Krämer, University of Regensburg
Yannick Münz, University of Konstanz
Patrick Struck, University of Konstanz
Maximiliane Weishäupl, University of Regensburg
Abstract
We analyse the binding properties of explicitly-rejecting key-encapsulation mechanisms (KEMs) obtained by the Fujisaki-Okamoto (FO) transform. The framework for binding notions, introduced by [CDM24], generalises robustness and collision-freeness, and was motivated by the discovery of new types of attacks against KEMs. Implicitly-rejecting FO-KEMs have already been analysed with regard to the binding notions, with [KSW25b] providing the full picture. Binding notions for explicitly-rejecting FO-KEMs have been examined only partially, leaving several gaps. Moreover, the analysis of the explicit-rejection setting must account for additional binding notions that implicitly-rejecting KEMs cannot satisfy. We give mostly positive results for the explicitly-rejecting FO transform—though many notions require further robustness assumptions on the underlying PKE. We then show that the explicit FO transform with plaintext confirmation hash (HFO) achieves all notions and requires weaker robustness assumptions. Finally, we introduce a slightly modified version of the HFO transform that achieves all binding notions without requiring any robustness of the underlying PKE.
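For orientation, the explicitly-rejecting transform under discussion has the following textbook shape (a simplified sketch with random oracles $G, H$; details such as key derivation vary across formulations):

\[
\begin{array}{l}
\mathsf{Encaps}(pk):\ m \leftarrow_{\$} \mathcal{M};\ c := \mathsf{Enc}(pk, m; G(m));\ K := H(m, c);\ \text{return } (c, K).\\[2pt]
\mathsf{Decaps}(sk, c):\ m' := \mathsf{Dec}(sk, c);\ \text{if } m' = \bot \text{ or } \mathsf{Enc}(pk, m'; G(m')) \ne c,\ \text{return } \bot;\ \text{else return } H(m', c).
\end{array}
\]

Plaintext confirmation appends a hash tag $d := H'(m)$ to the ciphertext and has decapsulation additionally check $d$; this extra check is the mechanism the analysis leverages to weaken the robustness assumptions on the underlying PKE.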
Publication status: Preprint.
Refined Linear Approximations for ARX Ciphers and Their Application to ChaCha(2025/2128)
Authors
Yurie Okada, The University of Osaka
Atsuki Nagai, KDDI (Japan)
Atsuko Miyaji, The University of Osaka
Abstract
ARX-based ciphers such as Salsa20 and ChaCha achieve high performance using only modular addition, rotation, and XOR. While ARX constructions are widely deployed in practice, linear and differential-linear cryptanalysis often reveal non-negligible biases in their reduced-round variants. Previous work has shown that a 7-round distinguisher on ChaCha is feasible, requiring about \(2^{214}\) operations and relying on a linear approximation with a theoretical bias of \(2^{-53}\). However, such theoretical approximations significantly deviate from experimental observations. In this work, we resolve these discrepancies by introducing new fundamental linear approximations for two consecutive additions over three independent variables. We rigorously derive the exact probabilities of these approximations, demonstrating that the conventional independence assumption leads to systematic errors in bias estimation. Applying our theorem to ChaCha, we refine the probabilities of key approximations used in previous attacks. Our refined estimates closely match experimentally observed biases, reducing the gap between theory and practice. These results provide a more accurate foundation for future differential-linear cryptanalysis of ChaCha and other ARX-based designs.
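For context, the classical single-addition baseline such analyses start from (a standard fact about independent uniform $x, y$, where $\boxplus$ is addition modulo $2^n$ and $x_i$ denotes bit $i$; it is not the paper's new two-addition theorem) is

\[
  \Pr\bigl[(x \boxplus y)_i = x_i \oplus y_i\bigr] \;=\; \tfrac{1}{2} + 2^{-(i+1)},
\]

since bit $i$ differs from the XOR exactly when the incoming carry is $1$, and that carry is zero with probability $\tfrac{1}{2} + 2^{-(i+1)}$. When the output of one addition feeds another, the operands are no longer independent, so multiplying such per-addition biases as if they were independent misestimates the true correlation, which is the discrepancy the abstract refers to.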
Publication status: Published elsewhere. Minor revision. INDOCRYPT 2025.
Censorship-Resistant Sealed-Bid Auctions on Blockchains(2025/2127)
Authors
Orestis Alpos, Common Prefix
Lioba Heimbach, Category Labs
Kartik Nayak, Duke University
Sarisht Wadhwa, Duke University
Abstract
Traditional commit-and-reveal mechanisms have been used to realize sealed-bid on-chain auctions. However, these leak timing information, impose inefficient participation costs -- the inclusion fee to be paid for adding the transaction on-chain -- and also require multiple slots to execute the auction. Recent research investigates single-slot auctions; however, it requires a high threshold of honest parties.
We present a protocol that addresses these issues. Our design combines timestamp-based certificates with censorship resistance through inclusion lists. The resulting protocol satisfies four properties, the first being a strong hiding property, which consists of Value Indistinguishability, Existential Obfuscation and User Obfuscation. This not only ensures that the adversary cannot differentiate between two values of bids (as the previously defined Hiding property does in Pranav et al. [MCP]), but also that the very existence of a bid and the identity of the bidder remain obfuscated. The second property is Short-Term Censorship Resistance, ensuring that, if the underlying blockchain outputs a block, then the auction would contain bids from all honest users. The third is a new property we introduce, Auction Participation Efficiency (APE), that measures how closely on-chain outcomes resemble classical auctions in terms of costs for participating users. And the fourth property is No Free Bid Withdrawal, which disallows committed bids from being withdrawn in case the bidder changes its mind.
Together, these properties yield a fair, private, and economically robust auction primitive that can be integrated into any blockchain to support secure and efficient auction execution.
Publication status: Preprint.
DPaaS: Improving Decentralization by Removing Relays in Ethereum PBS(2025/2126)
Authors
Chenyang Liu, Duke University
Ittai Abraham, Intel Labs
Matthew Lentz, Duke University
Kartik Nayak, Duke University
Abstract
Proposer-Builder Separation (PBS) in Ethereum improves decentralization and scalability by offloading block construction to specialized builders. In practice, MEV-Boost implements PBS via a side-car protocol with trusted relays between proposers and builders, resulting in increased centralization as well as security (e.g., block stealing) and performance concerns. We propose Decentralized Proposer-as-a-Service (DPaaS), a deployable architecture that eliminates centralized relays while preserving backward compatibility with Ethereum’s existing consensus layer. Our insight is that we can reduce centralized trust by distributing the combined roles of the proposer and relay to a set of Proposer Entities (PEs), each running in independent Trusted Execution Environments (TEEs). For compatibility, DPaaS presents itself to Ethereum as a single validator, leveraging threshold and aggregation properties of the BLS signature scheme used in Ethereum. At the same time, DPaaS protocols ensure fair exchange between builders and proposers even in the face of a small fraction of TEE failures or partial synchrony in networks. Our evaluation, deployed across four independent cloud hosts and driven by real-world traces, shows that DPaaS achieves less than 5 ms bid processing latency and 55.75 ms latency from the end of auction to block proposal -- demonstrating that DPaaS can offer security and decentralization benefits while providing strong performance.
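The BLS properties being leveraged are the textbook ones, sketched here generically (how the validator key is shared among PEs is part of the DPaaS design, not this sketch): with Shamir shares $sk_i$ of a signing key $sk$, each party issues a partial signature $\sigma_i = H(m)^{sk_i}$, and any qualified set $T$ combines them via Lagrange coefficients,

\[
  \sigma \;=\; \prod_{i \in T} \sigma_i^{\lambda_{i,T}} \;=\; H(m)^{sk},
  \qquad
  e(\sigma, g_2) \;=\; e(H(m), pk),
\]

so the chain verifies a single ordinary BLS signature from what it treats as one validator.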
Publication status: Preprint.
Are ideal functionalities really ideal?(2025/2125)
Authors
Myrto Arapinis, University of Edinburgh
Véronique Cortier, Centre National de la Recherche Scientifique
Hubert de Groote, Université Paris Saclay
Charlie Jacomme, French Institute for Research in Computer Science and Automation
Steve Kremer, French Institute for Research in Computer Science and Automation
Abstract
Ideal functionalities are used to study increasingly complex protocols within the Universal Composability framework. However, such functionalities are often complex themselves, making it difficult to assess whether they truly fulfill their promises. In this paper, we present four attacks on functionalities from various applications (e-voting, SMPC, anonymous lotteries, and smart metering), demonstrating that they do not capture the intuitively expected properties.
We argue that ideal functionalities should not merely be justified as secure at a high level but rigorously proven to be so. To this end, we propose a methodology that combines game-based proofs and computer-aided verification: ideal functionalities can in fact be treated as protocols, and one can use traditional game-based proofs to study them, where any game-based security property proven on the functionality does transfer to any protocol that realizes it. We also propose fixed versions of the ideal functionalities we studied, and formally define the security properties they should satisfy through a game. Finally, using Squirrel, a proof assistant for protocol security, we formally prove that the fixed functionalities verify the specified game-based security properties.
Publication status: Preprint.
SALSAA – Sumcheck-Aided Lattice-based Succinct Arguments and Applications(2025/2124)
Authors
Shuto Kuriyama, Aalto University, Espoo, Finland
Russell W. F. Lai, Aalto University, Espoo, Finland
Michał Osadnik, Aalto University, Espoo, Finland
Lorenzo Tucci, Aalto University, Espoo, Finland
Abstract
We present SALSAA, a more efficient and more versatile extension of the state-of-the-art lattice-based fully-succinct argument frameworks, ``RoK, paper, SISsors (RPS)'' and ``RoK and Roll (RnR)'' [Klooß, Lai, Nguyen, and Osadnik; ASIACRYPT'24, '25], integrating the sumcheck technique as a main component. This integration enables us to design an efficient norm-check protocol (controlling the norm during witness extraction) with a strictly linear-time prover while reducing proof sizes by 2-3$\times$ compared to the previous quasi-linear-time norm-check in RPS/RnR, eliminating a central performance bottleneck.
The sumcheck integration also allows us to natively support a wider class of relations, including rank-1 constraint systems (R1CS), which are widely used to express real-world computations.
To demonstrate the versatility and efficiency of our framework, we showcase three impactful applications achieved by different RoKs (Reductions of Knowledge) compositions:
(i) a lattice-based succinct argument of knowledge with a linear-time prover, achieving a verifier time of $41$ ms, prover runtime of $10.61$ s, and proof size of $979$ KB for a witness of $2^{28}$ $\mathbb{Z}_q$ elements;
(ii) a polynomial commitment scheme with matching performance; and
(iii) the first lattice-based folding scheme natively operating on $\ell_2$-norm-bounded witnesses, achieving highly efficient verification in $2.28$ ms and producing a proof of just $73$ KB for a witness of $2^{28}$ $\mathbb{Z}_q$ elements, outperforming prior works for the family of linear relations.
We provide a modular, concretely efficient Rust implementation of our framework, benchmarked over cyclotomic rings with AVX-512-accelerated NTT-based arithmetic, demonstrating the practical efficiency of our approach.
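The sumcheck component referenced above is the standard interactive reduction, recalled here generically (its embedding into the lattice RoKs is the paper's contribution): to check a claimed sum $v = \sum_{x \in \{0,1\}^{\mu}} f(x)$ of a low-degree $\mu$-variate polynomial $f$, round $i$ has the prover send the univariate polynomial

\[
  g_i(X) \;=\; \sum_{x_{i+1},\dots,x_{\mu} \in \{0,1\}} f(r_1,\dots,r_{i-1}, X, x_{i+1},\dots,x_{\mu}),
\]

the verifier checks $g_1(0) + g_1(1) = v$ in the first round and $g_i(0) + g_i(1) = g_{i-1}(r_{i-1})$ thereafter, then samples a fresh challenge $r_i$; after $\mu$ rounds a single evaluation check $f(r_1,\dots,r_{\mu}) = g_{\mu}(r_{\mu})$ remains. The prover's work is linear in the number of summed terms, which is consistent with the strictly linear-time prover claimed for the norm-check.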
Publication status: Preprint.
Dictators? Friends? Forgers. Breaking and Fixing Unforgeability Definitions for Anamorphic Signature Schemes(2025/2123)
Authors
Joseph Jaeger, Georgia Institute of Technology
Roy Stracovsky, Georgia Institute of Technology
Abstract
Anamorphic signature schemes (KPPYZ, Crypto 2023) allow users to hide encrypted messages in signatures to enable covert communication in a hypothesized scenario where encryption is outlawed by a "dictator" but authentication is permitted. We enhance the security of anamorphic signatures by proposing two parallel notions of unforgeability which close gaps in existing security definitions. The first notion considers a dictator who wishes to forge anamorphic signatures. This notion patches a divide between the definition and a stated security goal of robustness (BGHMR, Eurocrypt 2024). We port two related BGHMR constructions to the signature scheme setting and demonstrate that, as presented, both of these and a construction from KPPYZ are insecure under an active dictator. However, two of the three can easily be modified to satisfy our definition. The second notion we propose considers a recipient who wishes to forge signatures. To motivate this notion, we identify a gap in an existing security definition from KPPYZ and present attacks that allow parties to be impersonated when using schemes erroneously deemed secure. We then formalize our new unforgeability definition to close this gap. Interestingly, while the new definition is only modestly different from the old one, the change introduces subtle technical challenges that arise when proving security. We overcome these challenges in our reanalysis of existing anamorphic signature schemes by showing they achieve our new notion when built from chosen-randomness secure signatures or with encryption that satisfies a novel ideal-model simulatability property.
Publication status: A major revision of an IACR publication in ASIACRYPT 2024.
Adaptive Security for Constrained PRFs(2025/2122)
Authors
Kaishuo Cheng, Georgia Institute of Technology
Joseph Jaeger, Georgia Institute of Technology
Abstract
There is a gap between the security of constrained PRFs required in some applications and the security provided by existing definitions. This gap is typically patched by only considering nonadaptive security or manually mixing the CPRF with a random oracle (implicitly constructing a new CPRF) to achieve adaptive security. We fill this gap with a new definition for constrained PRFs with strong adaptive security properties and proofs that it is achieved by practical constructions based on the cascade PRF (which generalizes the GGM construction) and AMAC. We apply the definition for analyzing searchable symmetric encryption and puncturable key wrapping.
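As background for the cascade/GGM-style constructions mentioned above, a minimal sketch only (SHA-256 stands in for the length-doubling PRG, and prefix-constraining is shown in its simplest form; none of this reflects the paper's definitions or proofs):

import hashlib

def prg(seed: bytes):
    # length-doubling PRG stand-in: two domain-separated SHA-256 evaluations
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())

def ggm_eval(key: bytes, x: str) -> bytes:
    # x is a bit string; walk the GGM tree from the root key
    k = key
    for bit in x:
        k = prg(k)[int(bit)]
    return k

def constrain_prefix(key: bytes, prefix: str) -> bytes:
    # constrained key for all inputs starting with `prefix`: the subtree key
    return ggm_eval(key, prefix)

def constrained_eval(ckey: bytes, prefix: str, x: str) -> bytes:
    assert x.startswith(prefix)
    return ggm_eval(ckey, x[len(prefix):])

master = b"\x11" * 32
assert constrained_eval(constrain_prefix(master, "01"), "01", "0110") == ggm_eval(master, "0110")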
Publication status: A major revision of an IACR publication in CRYPTO 2025.
Generic and Algebraic Computation Models: When AGM Proofs Transfer to the GGM(2025/2121)
Authors
Joseph Jaeger, Georgia Institute of Technology
Deep Inder Mohan, Georgia Institute of Technology
Abstract
The claim by Fuchsbauer, Kiltz, and Loss (CRYPTO 2018) that (some) hardness results in the algebraic group model imply the same hardness results in the generic group model was recently called into question by Katz, Zhang, and Zhou (ASIACRYPT 2022). The latter gave an interpretation of the claim under which it is incorrect. We give an alternate interpretation under which it is correct, using natural frameworks for capturing generic and algebraic models for arbitrary algebraic structures. Most algebraic analyses in the literature can be captured by our frameworks, making the claim correct for them.
Publication status: Published elsewhere. Major revision. CRYPTO 2024.
Language-Agnostic Detection of Computation-Constraint Inconsistencies in ZKP Programs via Value Inference(2025/2120)
Authors
Arman Kolozyan, Vrije Universiteit Brussel
Bram Vandenbogaerde, Vrije Universiteit Brussel
Janwillem Swalens, Nokia Bell Labs
Lode Hoste, Nokia Bell Labs
Stefanos Chaliasos, UCL Centre for Blockchain Technologies, zkSecurity
Coen De Roover, Vrije Universiteit Brussel
Abstract
Zero-knowledge proofs (ZKPs) allow a prover to convince a verifier of a statement's truth without revealing any other information. In recent years, ZKPs have matured into a practical technology underpinning major applications. However, implementing ZKP programs remains challenging, as they operate over arithmetic circuits that encode the logic of both the prover and the verifier. Therefore, developers must not only express the computations for generating proofs, but also explicitly specify the constraints for verification. As recent studies have shown, this decoupling may lead to critical ZKP-specific vulnerabilities.
Unfortunately, existing tools for detecting them are limited, as they:
(1) are tightly coupled to specific ZKP languages,
(2) are confined to the constraint level, preventing reasoning about the underlying computations,
(3) target only a narrow class of bugs,
and (4) suffer from scalability bottlenecks due to reliance on SMT solvers.
To address these limitations, we propose a language-agnostic formal model, called the Domain Consistency Model (DCM), which captures the relationship between computations and constraints. Using this model, we provide a taxonomy of vulnerabilities based on computation-constraint mismatches, including novel subclasses overlooked by existing models. Next, we implement a lightweight automated bug detection tool, called CCC-Check, which is based on abstract interpretation. We evaluate CCC-Check on a dataset of 20 benchmark programs. Compared to the SoTA verification tool CIVER, our tool achieves a 100-1000$\times$ speedup, while maintaining a low false positive rate. Finally, using the DCM, we examine six widely adopted ZKP projects and uncover 15 previously unknown vulnerabilities. We reported these bugs to the projects' maintainers, and 13 have since been patched. Of these 15 vulnerabilities, 12 could not be captured by existing models.
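A toy example of the computation-constraint mismatches being targeted, written in plain Python rather than any particular ZKP language (the variable names and the missing constraint are illustrative): the witness generator computes an is-zero flag, but the constraint side omits the equation that would pin the flag down, so a malicious witness passes.

def witness(x, p=101):
    # computation side: what an honest prover computes
    flag = 1 if x % p == 0 else 0
    inv = 0 if x % p == 0 else pow(x, p - 2, p)   # x^{-1} mod p when x != 0
    return {"x": x % p, "flag": flag, "inv": inv}

def constraints_ok(w, p=101):
    # constraint side: what the circuit actually enforces.
    # BUG: it only checks that flag is boolean and flag * x == 0, but omits
    # the constraint x * inv == 1 - flag that would force flag = 1 when x == 0.
    c1 = (w["flag"] * (w["flag"] - 1)) % p == 0
    c2 = (w["flag"] * w["x"]) % p == 0
    return c1 and c2

honest = witness(0)
assert honest["flag"] == 1 and constraints_ok(honest)
malicious = {"x": 0, "flag": 0, "inv": 0}     # claims that 0 is non-zero
assert constraints_ok(malicious)              # accepted: the circuit is under-constrained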
Publication status: Preprint.
Twinkle: A family of Low-latency Schemes for Authenticated Encryption and Pointer Authentication(2025/2119)
Authors
Jianhua Wang, Shield Lab, Huawei Technologies Co., Ltd., China
Tao Huang, Shield Lab, Huawei International Pte. Ltd., Singapore
Shuang Wu, Shield Lab, Huawei International Pte. Ltd., Singapore
Zilong Liu, HiSilicon Technologies Co. Ltd., China
Abstract
In this paper, we aim to explore the design of low-latency authenticated encryption schemes, particularly for memory encryption, with a focus on the temporal uniqueness property. To achieve this, we present the low-latency Pseudo-Random Function (PRF) called $\texttt{Twinkle}$, with an output of up to 1152 bits. Leveraging only one block of $\texttt{Twinkle}$, we develop $\texttt{Twinkle-AE}$, a specialized authenticated encryption scheme with six variants covering different cache line sizes and security requirements. We also propose $\texttt{Twinkle-PA}$, a pointer authentication algorithm, which takes a 64-bit pointer and 64-bit context as input and outputs a tag of 1 to 32 bits.
We conducted thorough security evaluations of both the PRFs and these schemes, examining their robustness against various common attacks. The results of our cryptanalysis indicate that these designs successfully achieve their targeted security objectives.
Hardware implementations using the FreePDK45nm library show that $\texttt{Twinkle-AE}$ achieves an encryption and authentication latency of 3.83 ns for a cache line. In comparison, the $\texttt{AES}$-CTR with WC-MAC scheme and Ascon-128a achieve latencies of 9.78 ns and 27.30 ns, respectively.
For the pointer authentication scheme $\texttt{Twinkle-PA}$, the latency is 2.04 ns, while $\texttt{QARMA-64-}\sigma_0$ has a latency of 5.57 ns.
Publication status: A major revision of an IACR publication in CIC 2024.
Shunya Otomo, Institute of Science Tokyo
Kenji Yasunaga, Institute of Science Tokyo
Abstract
A recent study by Yamashita and Yasunaga (GameSec 2023) presented a constant-round deterministic broadcast protocol secure against \emph{detection-averse} adversaries --- those who prefer to attack without being detected. In this work, we revisit their protocol and observe that it remains secure even against a broader class of adversaries, not necessarily detection-averse. We formalize its detection mechanism as \emph{local detectability} and construct broadcast protocols with local detectability that address two weaknesses of the original protocol: (1) it only guarantees weak validity, and (2) it may cause false detections. Our first protocol achieves round complexity four against rational adversaries and $t+4$ against malicious adversaries, where the adversary corrupts at most $t$ parties. Our second protocol achieves the optimal round complexity of $t+1$ for malicious adversaries, while the round complexity is four against detection-averse adversaries.
Publication status: Published elsewhere. CANS 2025.
Revisiting Simulation Extractability in the Updatable Setting(2025/2117)
Authors
Hamidreza Khoshakhlagh, Aarhus University, Partisia
Abstract
We revisit the notion of Simulation Extractability (SE) for SNARKs in the updatable setting. We demonstrate that existing formal definitions of SE in this setting are insufficient to guarantee the required non-malleability in real-world scenarios.
Towards this, we first identify and frame a malleability vulnerability: a cross-SRS reinterpretation attack, which shows that an adversary can reuse or maul proofs across different, correlated SRSs generated through the update procedure. This is made possible because existing security definitions fail to model an adversary’s ability to observe simulated proofs relative to various derived SRSs.
To close this security gap, we propose a revised and stronger version of Updatable Simulation Extractability (USE), a notion originally defined in [GKK+22]. Our definition models a dynamic environment where the SRS is adaptively updatable by the adversary, who can also query simulation oracles for proofs under the resulting family of reachable SRSs. This captures the full extent of the adversarial capabilities observed in practice.
Finally, we provide positive results for popular polynomial-IOP-based SNARKs, and show that these schemes satisfy our stronger USE notion, provided the circuit-specific SRS is securely bound into the proof transcript, e.g., via a correct implementation of the Fiat-Shamir transformation.
Publication status: Preprint.
Oblivious Batch Updates for Bloom-Filter-based Outsourced Cryptographic Protocols(2025/2116)
Authors
Marten van Dijk, Centrum Wiskunde & Informatica
Dandan Yuan, Centrum Wiskunde & Informatica
Abstract
In this work, we initiate the formal study of oblivious batch updates over outsourced encrypted Bloom filters, focusing on scenarios where a storage-limited sender must insert or delete batches of elements in a Bloom filter maintained on an untrusted server. Our survey identifies only two prior approaches (CCS 2008 and CCS 2012) that can be adapted to this problem. However, they either fail to provide adequate security in dynamic scenarios or incur prohibitive update costs that scale with the filter’s maximum capacity rather than the actual batch size.
To address these limitations, we introduce a new cryptographic primitive, $\textit{Oblivious Bloom Filter Insertion}$ ($\textsf{OBFI}$), and propose novel constructions. At the core of our design is a novel building block, $\textit{Oblivious Bucket Distribution}$ ($\textsf{OBD}$), which enables a storage-limited sender to distribute a large array of elements, uniformly sampled from a finite domain, into small, fixed-size buckets in a data-oblivious manner determined by element order. The design of $\textsf{OBD}$ is further supported by identifying and proving a new structural property of such arrays, which establishes tight and explicit probabilistic bounds on the number of elements falling within predefined subranges of the domain.
Our $\textsf{OBFI}$ constructions achieve adaptive data-obliviousness and ensure that batch update costs scale primarily with the batch size. Depending on the variant, the sender’s storage requirement ranges from $O(\lambda)$, where $\lambda$ is the security parameter, down to $O(1)$. Finally, we demonstrate the practicality of $\textsf{OBFI}$ by integrating it into representative Bloom-filter-based cryptographic protocols for Searchable Symmetric Encryption, Public-key Encryption with Keyword Search, and Outsourced Private Set Intersection, thereby obtaining batch-updatable counterparts with state-of-the-art security and performance.
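For readers who want the underlying data structure in front of them, a plain (non-encrypted, non-outsourced) Bloom filter looks as follows; the parameters and hashing are illustrative and unrelated to the paper's constructions.

import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)   # one byte per bit, for clarity

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h, "big") % self.m

    def insert(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def query(self, item: bytes) -> bool:
        # no false negatives; false positives with tunable probability
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.insert(b"alice")
assert bf.query(b"alice")   # inserted elements are always found
# bf.query(b"mallory") is False except with a small false-positive probability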
Publication status: Preprint.
Weighted Batched Threshold Encryption with Applications to Mempool Privacy(2025/2115)
Authors
Amit Agarwal, Category Labs
Kushal Babel, Category Labs
Sourav Das, Category Labs
Babak Poorebrahim Gilkalaye, Category Labs
Arup Mondal, Ashoka University
Benny Pinkas, Bar-Ilan University
Peter Rindal, Category Labs
Aayush Yadav, George Mason University
Abstract
A Batched Threshold Encryption (BTE) scheme enables a committee of servers to perform a lightweight (in terms of communication and computation) threshold decryption of an arbitrary batch of ciphertexts from a larger pool, while ensuring the privacy of ciphertexts that are outside the batch. Such a primitive has a direct application in designing encrypted mempools for MEV protection in modern blockchains. Bormet et al. (USENIX 2025) recently proposed a BTE scheme called “BEAT-MEV” which is concretely efficient for small to moderate batch sizes.
In this work, we improve and extend the BEAT-MEV scheme in multiple ways. First, we improve the computational cost from quadratic to quasilinear in the batch size, thus making it practical for large batch sizes. This improvement is achieved by substituting the key-homomorphic punctured PRF used in BEAT-MEV with an FFT-friendly alternative. Second, we extend the ideas in their scheme to the weighted setting, where each server in the committee has an associated 'weight' value (e.g., stake weight of validators in PoS blockchains), while crucially ensuring that the communication cost remains independent of the weights. In contrast, BEAT-MEV with naive virtualization would incur communication cost linear in the total weight. Third, for handling the small failure rate inherent in BEAT-MEV scheme due to index collisions across different clients at the time of encryption, we propose a generalization of their suggested approach which offers an option to trade off between ciphertext size and server communication for a given failure rate.
We implement and evaluate our scheme and compare it with BEAT-MEV to demonstrate our concrete improvement. In the unweighted setting, we improve the computational cost (without increasing the communication cost) by ≈ 6× for a batch size of 512 ciphertexts. In the weighted setting, we improve the communication cost (without compromising computation time), over BEAT-MEV with naive virtualization, by ≈ 50× for 100 validators with total stake weight 5000 distributed as per the latest Solana stake distribution.
Publication status: Preprint.
Hardness of Range Avoidance and Proof Complexity Generators from Demi-Bits(2025/2114)
Authors
Hanlin Ren, Institute for Advanced Study
Yichuan Wang, University of California, Berkeley
Yan Zhong, Johns Hopkins University
Abstract
Given a circuit $G: \{0, 1\}^n \to \{0, 1\}^m$ with $m > n$, the *range avoidance* problem ($\text{Avoid}$) asks to output a string $y\in \{0, 1\}^m$ that is not in the range of $G$. Besides its profound connection to circuit complexity and explicit construction problems, this problem is also related to the existence of *proof complexity generators* --- circuits $G: \{0, 1\}^n \to \{0, 1\}^m$ where $m > n$ but for every $y\in \{0, 1\}^m$, it is infeasible to prove the statement "$y\not\in\mathrm{Range}(G)$" in a given propositional proof system.
This paper connects these two problems with the existence of *demi-bits generators*, a fundamental cryptographic primitive against nondeterministic adversaries introduced by Rudich (RANDOM '97).
$\bullet$ We show that the existence of demi-bits generators implies $\text{Avoid}$ is hard for nondeterministic algorithms. This resolves an open problem raised by Chen and Li (STOC '24). Furthermore, assuming the demi-hardness of certain LPN-style generators or Goldreich's PRG, we prove the hardness of $\text{Avoid}$ even when the instances are constant-degree polynomials over $\mathbb{F}_2$.
$\bullet$ We show that the dual weak pigeonhole principle is unprovable in Cook's theory $\mathsf{PV}_1$ under the existence of demi-bits generators secure against $\mathbf{AM}/_{O(1)}$, thereby separating Jeřábek's theory $\mathsf{APC}_1$ from $\mathsf{PV}_1$. Previously, Ilango, Li, and Williams (STOC '23) obtained the same separation under different (and arguably stronger) cryptographic assumptions.
$\bullet$ We transform demi-bits generators to proof complexity generators that are *pseudo-surjective* in certain parameter regime. Pseudo-surjectivity is the strongest form of hardness considered in the literature for proof complexity generators.
Our constructions are inspired by the recent breakthroughs on the hardness of $\text{Avoid}$ by Ilango, Li, and Williams (STOC '23) and Chen and Li (STOC '24). We use *randomness extractors* to significantly simplify the construction and the proof.
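For context, solutions to $\text{Avoid}$ always exist by counting; the entire difficulty is constructive:

\[
  |\mathrm{Range}(G)| \;\le\; 2^{n} \;<\; 2^{m}
  \quad\Longrightarrow\quad
  \{0,1\}^{m} \setminus \mathrm{Range}(G) \neq \emptyset .
\]

The dual weak pigeonhole principle is essentially the propositional counterpart of this counting step, which is why its (un)provability in bounded arithmetic is tied to the hardness of $\text{Avoid}$ and of proof complexity generators.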
Publication status: Published elsewhere. Minor revision. ITCS 2026.
Single-Server Private Outsourcing of zk-SNARKs(2025/2113)
Authors
Kasra Abbaszadeh, University of Maryland
Hossein Hafezi, New York University
Jonathan Katz, Google (United States)
Sarah Meiklejohn, Google (United States), University College London
Abstract
Succinct zero-knowledge arguments (zk-SNARKs) enable a prover to convince a verifier of the truth of a statement via a succinct and efficiently verifiable proof without revealing any additional information about the secret witness. A barrier to practical deployment of zk-SNARKs is their high proving cost. With this motivation, we study server-aided zk-SNARKs, where a client/prover outsources most of its work to a single, untrusted server while the server learns nothing about the witness or even the proof. We formalize this notion and show how to realize server-aided proving for widely deployed zk-SNARKs, including Nova, Groth16, and Plonk.
The key building block underlying our designs is a new primitive, encrypted multi-scalar multiplication (EMSM), that enables private delegation of multi-scalar multiplications (MSMs). We construct an EMSM from variants of the learning parity with noise assumption in which the client does $O(1)$ group operations, while the server’s work matches that of the plaintext MSM.
We implement and evaluate our constructions. Compared to local proving, our techniques lower the client's computation by up to $20\times$ and reduce the proving latency by up to $9\times$.
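To fix what is being delegated: a multi-scalar multiplication combines scalars and group elements as $\sum_i a_i P_i$. Below is a naive double-and-add version over a toy additive group standing in for an elliptic-curve group (the EMSM construction itself is not sketched here).

def msm(scalars, points, add, identity):
    # naive double-and-add: computes sum_i scalars[i] * points[i]
    acc = identity
    for s, P in zip(scalars, points):
        partial, base = identity, P
        while s:
            if s & 1:
                partial = add(partial, base)
            base = add(base, base)
            s >>= 1
        acc = add(acc, partial)
    return acc

# toy instantiation: the additive group Z_q stands in for a curve group
q = 2**61 - 1
assert msm([3, 5], [7, 11], lambda a, b: (a + b) % q, 0) == (3*7 + 5*11) % q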
Publication status: Preprint.
Sharing the Mask: TFHE bootstrapping on Packed Messages(2025/2112)
Authors
Loris Bergerat, Zama, Paris, France, Université Caen Normandie, ENSICAEN, CNRS, Normandie Univ, GREYC UMR 6072, F-14000 Caen, France
Charlotte Bonte, Zama, Paris, France
Benjamin R. Curtis, Zama, Paris, France
Jean-Baptiste Orfila, Zama, Paris, France
Pascal Paillier, Zama, Paris, France
Samuel Tap, Zama, Paris, France
Abstract
Fully Homomorphic Encryption (FHE) schemes typically experience significant data expansion during encryption, leading to increased computational costs and memory demands during homomorphic evaluations compared to their plaintext counterparts. This work builds upon prior methods aimed at reducing ciphertext expansion by leveraging matrix secrets under the Matrix-LWE assumption. In particular, we consider a ciphertext format referred to in this work as common mask (CM) ciphertexts, which comprises a shared mask and multiple message bodies. Each body encrypts a distinct message while reusing the common random mask. We demonstrate that all known FHEW/TFHE-style ciphertext variants and operations can be naturally extended to this CM format. Our benchmarks highlight the potential for amortizing operations using the CM structure, significantly reducing overhead. For instance, in the Boolean setting, we have up to a 51% improvement when packing 8 messages. Beyond ciphertext compression and amortized evaluations, the CM format also enables the generalization of several core-TFHE operations. Specifically, we support applying distinct lookup tables on different encrypted messages within a single CM ciphertext and private linear operations on messages encrypted within the same CM ciphertext.
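One natural reading of the common-mask format, sketched from the description above (the paper's exact encodings, parameters, and noise handling may differ), is that a single LWE mask $\mathbf{a}$ is paired with $k$ bodies under a matrix secret $S = (\mathbf{s}_1, \dots, \mathbf{s}_k) \in \mathbb{Z}_q^{k \times n}$:

\[
  \mathsf{ct} \;=\; \bigl(\mathbf{a},\; b_1,\dots,b_k\bigr),
  \qquad
  b_i \;=\; \langle \mathbf{a}, \mathbf{s}_i \rangle + e_i + \Delta\, m_i ,
\]

so the $n$-element mask is amortized over $k$ messages instead of being repeated for every ciphertext.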
Publication status: Published elsewhere. TCHES 2025.
SoK: Secure Computation over Secret Shares(2025/2111)
Authors
Tamir Tassa, The Open University of Israel
Arthur Zamarin, The Open University of Israel
Abstract
Secure multiparty computation (MPC) enables mutually distrustful parties to jointly compute functions over private data without revealing their inputs. A central paradigm in MPC is the secret-sharing-based model, where secret sharing underpins the efficient realization of arithmetic, comparison, numerical, and Boolean operations on shares of private inputs. In this paper, we systematize protocols for these operations, with particular attention to two foundational contributions \cite{ChidaGHIKLN18,NO07} that devised secure multiplication and comparison. Our survey provides a unified, self-contained exposition that highlights the composability, performance trade-offs, and implementation choices of these protocols. We further demonstrate how they support practical privacy-preserving systems, including recommender systems, distributed optimization platforms, and e-voting infrastructures. By clarifying the protocol landscape and connecting it to deployed and emerging applications, we identify concrete avenues for improving efficiency, scalability, and integration into real-world MPC frameworks. Our goal is to bridge theory and practice, equipping both researchers and practitioners with a deeper understanding of secret-sharing-based MPC as a foundation for privacy technologies.
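A minimal example of the secret-sharing substrate such protocols build on (additive sharing over a prime modulus; Shamir sharing and the surveyed multiplication and comparison protocols are more involved):

import random

q = 2**61 - 1   # illustrative prime modulus

def share(x, n=3):
    # additive shares: n values that sum to x modulo q
    parts = [random.randrange(q) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % q)
    return parts

def reconstruct(parts):
    return sum(parts) % q

def add_shares(a, b):
    # addition is local: each party adds its own shares, no communication
    return [(ai + bi) % q for ai, bi in zip(a, b)]

x, y = 1234, 5678
assert reconstruct(add_shares(share(x), share(y))) == (x + y) % q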
Publication status: Preprint.
A note on mutual correlated agreement for Reed-Solomon codes(2025/2110)
Authors
Ulrich Haböck, StarkWare Industries LTD
Abstract
We outline how to generalize the Guruswami-Sudan list decoder analysis from Ben-Sasson, Carmon, Ishai, Kopparty and Saraf [BCI+20] in order to obtain a “global” proximity gap, called mutual correlated agreement in Arnon, Chiesa, Fenzi and Yogev [WHIR 2024], or strong correlated agreement in Zeilberger [Khatam 24].
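For orientation, the ``local'' correlated agreement statement of [BCI+20] that the note generalizes reads roughly as follows (parameters elided): if, for more than an $\mathrm{err}(\delta)$ fraction of $z \in \mathbb{F}$, the combination $u_0 + z\, u_1$ is $\delta$-close to the Reed-Solomon code $V$ over domain $D$, then there exist a single set $S \subseteq D$ with $|S| \ge (1-\delta)|D|$ and codewords $v_0, v_1 \in V$ with

\[
  u_0|_S = v_0|_S \quad\text{and}\quad u_1|_S = v_1|_S .
\]

Mutual (or strong) correlated agreement additionally asks, roughly, that for almost every $z$ any set on which $u_0 + z\,u_1$ agrees with $V$ is also a set on which $u_0$ and $u_1$ correlated-agree with $V$, which is the ``global'' strengthening referred to above.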
Publication status: Preprint.
Secure Lookup Tables: Faster, Leaner, and More General(2025/2109)
Authors
Chongrong Li, Shanghai Jiao Tong University
Pengfei Zhu, Tsinghua University
Yun Li, Ant Group
Zhanpeng Guo, Ant Group
Jingyu Li, Ant Group
Yuncong Hu, Shanghai Jiao Tong University
Zhicong Huang, Ant Group
Cheng Hong, Ant Group
Abstract
Secure lookup table (LUT) protocols allow retrieving values from a table at secret indices, and have become a promising approach for the secure evaluation of non-linear functions. Most existing LUT protocols target the two-party setting, where the best protocols achieve a communication cost of $O(N)$ for a table of size $N$. MAESTRO (Morita et al., USENIX Security 2025) represents the state-of-the-art LUT protocol for AES in the three-party honest-majority setting, with a communication cost of $O(N^{1/2})$; malicious security is achieved with distributed zero-knowledge proofs. However, it only supports single-input tables over characteristic-2 fields $\mathbb{F}_{2^k}$ and lacks support for multi-input tables over rings $\mathbb{Z}_{2^k}$, which are more widely used in modern computation. Moreover, the $O(N^{1/2})$ cost remains expensive for large-scale applications; their efficient distributed zero-knowledge proofs are specialized for AES and cannot be easily applied to $\mathbb{Z}_{2^k}$.
In this work, we present MARLUT, a new generalized and optimized LUT construction supporting multi-input tables over both rings $\mathbb{Z}_{2^k}$ and fields $\mathbb{F}_{2^k}$ with malicious security. We achieve this by (1) extending the semi-honest LUT protocol from MAESTRO, utilizing high-dimensional tensors to reduce its communication cost to $O(N^{1/3})$, and (2) designing a new distributed zero-knowledge proof for inner-product relations over $\mathbb{Z}_{2^k}$. Our distributed zero-knowledge proof is more efficient than the state-of-the-art work (Li et al., CCS 2024) and may be of independent interest. Experiments show that on a table of size $2^{16}$, our semi-honest LUT protocol reduces the offline computational and communication costs by factors of $5.95$ and $3.23$, respectively. Our distributed zero-knowledge proofs show up to $7.07\times$ and $4.97\times$ speedups over the state-of-the-art protocol on the rings $\mathbb{Z}_{2^8}$ and $\mathbb{Z}_{2^{16}}$, respectively.
Publication status: Preprint.
The Grain Family of Stream Ciphers: an Abstraction, Strengthening of Components and New Concrete Instantiations(2025/2108)
Authors
Palash Sarkar, Indian Statistical Institute
Abstract
The first contribution of the paper is to put forward an abstract definition of the Grain family of stream ciphers which formalises the different components that are required to specify a particular member of the family. Our second contribution is to provide new and strengthened definitions of the components. These include defining new classes of nonlinear Boolean functions, an improved definition of the state update function during initialisation, the choice of tap positions, and the possibility of the linear feedback shift register being smaller than the nonlinear feedback shift register. The third contribution of the paper is to put forward seven concrete proposals of stream ciphers by suitably instantiating the abstract family, one at the 80-bit security level, and two each at the 128-bit, 192-bit, and 256-bit security levels. At the 80-bit security level, compared to the well known Grain~v1, the new proposal uses Boolean functions with improved cryptographic properties \textit{and} an overall lower gate count. At the 128-bit level, compared to the ISO/IEC standard Grain-128a, the new proposals use Boolean functions with improved cryptographic properties; one of the proposals requires a few extra gates, while the other has an overall lower gate count. At the 192-bit and 256-bit security levels, there are no proposals in the literature with smaller gate counts.
Publication status: Preprint.
Pratima Jana, Indian Institute of Technology Kharagpur
Ratna Dutta, Indian Institute of Technology Kharagpur
Abstract
Password-based Authenticated Key Exchange (${\sf PAKE}$) is a widely acknowledged, promising security mechanism for establishing secure communication between devices. It enables two parties to mutually authenticate each other over insecure networks and generate a session key using a low-entropy password. However, the existing $\mathsf{PAKE}$ protocols encounter significant challenges concerning both security and efficiency in the context of the \textit{Internet of Things} (IoT). In response to these challenges, we contribute to the advancement of post-quantum secure $\mathsf{PAKE}$ protocols tailored for IoT applications, enriching the existing landscape. In this study, we introduce two novel protocols, $\mathsf{PAKE}$-\textup{I} and $\mathsf{PAKE}$-\textup{II}, designed to address these concerns and enhance the security standards of $\mathsf{PAKE}$ protocols. While $\mathsf{PAKE}$-\textup{I} is secure under lattice-based hardness assumptions, $\mathsf{PAKE}$-\textup{II} derives its security from isogeny-based hard problems. Our lattice-based protocol $\mathsf{PAKE}$-\textup{I} is secure based on the \textit{Pairing with Errors} ($\mathsf{PWE}$) assumption and the \textit{Decision Ring Learning with Errors} ($\mathsf{DRLWE}$) assumption, and our isogeny-based protocol $\mathsf{PAKE}$-\textup{II} is secure based on the hardness of the \textit{Group Action Inverse Problem} ($\mathsf{GAIP}$) and the \textit{Commutative SuperSingular Diffie-Hellman} ($\mathsf{CSSDH}$) problem in the Random Oracle Model $(\mathsf{ROM})$. We present a comprehensive security proof in a conventional game-based indistinguishability security model that addresses offline dictionary attacks, replay attacks, compromise attacks for both parties (client and server) and perfect forward secrecy. Additionally, our proposed $\mathsf{PAKE}$ protocols are the first post-quantum secure $\mathsf{PAKE}$s that achieve identity privacy and resistance to pre-computation attacks. Through rigorous performance evaluations, the paper demonstrates that the proposed $\mathsf{PAKE}$ schemes are ultralight and exhibit notable advantages in terms of total computation cost and enhanced security properties when compared to the existing protocols. More positively, both of the proposed $\mathsf{PAKE}$ protocols are optimal in the sense that they achieve explicit mutual authentication in only three rounds, which is the least number of rounds required for acquiring mutual authentication between two parties.
Publication status: Preprint.
SoK: Blockchain Oracles Between Theory and Practice(2025/2106)
Authors
Colin Finkbeiner, University of Connecticut
Ghada Almashaqbeh, University of Connecticut
Abstract
Smart contract-based decentralized applications (dApps) have become an ever-growing way to facilitate complex on-chain operations. Oracle services strengthened this trend by enabling dApps to access real-world data and respond to events happening outside the blockchain ecosystem. A large number of academic and industrial oracle solutions have emerged, capturing various designs, capabilities, and security assumptions/guarantees. This rapid development makes it challenging to comprehend the landscape of oracles, understand their trade-offs, and build on them.
To address these challenges, we develop a systematization of knowledge for blockchain oracle services. To the best of our knowledge, our work is the first to provide an extensive study of oracles while empirically investigating their capabilities in practice. After examining the general design framework of oracles, we develop a multi-dimensional systematization framework assessing existing solutions based on their capabilities, trust and security assumptions/guarantees, and their underlying design architecture. To further aid in this assessment, we conduct a number of empirical experiments to examine oracles deployed in practice, thus offering additional insights about their deployment maturity, usage popularity, performance, and ease-of-use. We go on to distill a number of insights and gaps, thus providing a guide for practitioners (on the use of these oracles) and researchers (by highlighting gaps and open problems).
Publication status: Preprint. First uploaded: , last revision:
A Lattice-Based Puncturable Attribute-Based Proxy Re-Encryption with HRA Security and Switchable Tags for Cloud Data Sharing(2025/2105)
Authors
Tianqiao Zhang, Huaibei Normal University
Mingming Jiang, Huaibei Normal University
Fucai Luo, Zhejiang Gongshang University
Yuyan Guo, Huaibei Normal University
Jinqiu Hou, Anhui University of Finance and Economics
Abstract
With the rapid advancement of cloud computing technology, outsourcing massive datasets to cloud servers has become a prominent trend, making secure and efficient data sharing mechanisms a critical requirement. Attribute-based proxy re-encryption (ABPRE) has emerged as an ideal solution due to its support for fine-grained, one-to-many access control and robust ciphertext transformation capabilities. However, existing ABPRE schemes still exhibit shortcomings in addressing forward security issues caused by long-term private key leakage, threats from quantum computer attacks, and vulnerabilities to honest re-encryption attacks (HRA). To simultaneously resolve these challenges, this paper introduces a novel cryptographic primitive termed puncturable attribute-based proxy re-encryption with switchable tags (PABPRE-ST), constructing a secure cloud data sharing scheme that supports fine-grained revocation. By integrating puncturable encryption (PE) mechanisms into the ABPRE framework, the scheme achieves fine-grained ciphertext revocation based on tags. In PABPRE-ST, data owners embed tags into ciphertexts, enabling data users to puncture specific tags and thereby revoke access to corresponding ciphertexts at a granular level. Furthermore, the scheme allows delegators to switch ciphertext tags, enhancing sharing flexibility. We formalize the security definitions for the proposed puncturable attribute-based proxy re-encryption scheme and prove its security under the learning with errors (LWE) assumption, which is widely believed to be resistant to quantum computer attacks. Security analysis demonstrates that the proposed scheme achieves HRA security in the standard model.
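To make the tag-based revocation semantics concrete, the following toy sketch models only the puncturing interface described above; it is plain Python with no lattices, no proxy re-encryption, and no relation to the actual PABPRE-ST algorithms, and all names are illustrative.

```python
# Toy model of tag-based puncturing semantics only (illustrative; not the PABPRE-ST scheme).
class PuncturableKey:
    """A decryption capability that can be selectively revoked per ciphertext tag."""

    def __init__(self):
        self.punctured = set()

    def puncture(self, tag: str) -> None:
        # After puncturing `tag`, ciphertexts carrying that tag become undecryptable with this key.
        self.punctured.add(tag)

    def can_decrypt(self, ciphertext_tags) -> bool:
        # Decryption is allowed only if none of the ciphertext's tags has been punctured.
        return not (set(ciphertext_tags) & self.punctured)


key = PuncturableKey()
assert key.can_decrypt({"2024-Q1", "project-x"})
key.puncture("project-x")                        # fine-grained revocation of a single tag
assert not key.can_decrypt({"2024-Q1", "project-x"})
assert key.can_decrypt({"2024-Q1"})              # other ciphertexts remain accessible
```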
Publication status: Preprint.
Quantum Voting Protocol from Classical Assumptions(2025/2104)
Authors
Tingyu Ge, Shandong University
Mingqiang Wang, Shandong University
Xiaolei Wang, Shandong University
Xinyuan Zhao, Macau University of Science and Technology
Xuanxuan Xiao, Macau University of Science and Technology
Abstract
Quantum voting uses quantum mechanics to design voting schemes. Existing quantum voting protocols mainly rely on quantum entangled states. However, they rarely consider repeated voting or vote tampering by malicious voters, and hybrid quantum voting protocols have not been discussed. In this paper, we use EFI pairs (efficiently samplable, statistically far but computationally indistinguishable pairs of quantum states) instead of quantum entangled states to address these shortcomings and propose a new quantum voting protocol. Our protocol is structured to prevent repeated voting by any voter and to prevent the leakage of voters' voting information. The security of our protocol can ultimately be reduced to a classical assumption, namely BQP = QMA. Combined with quantum key distribution (QKD), we further optimize the protocol to prevent malicious adversaries from interfering with the final voting results. Moreover, we use extended noisy trapdoor claw-free functions (ENTCF) to construct the first hybrid quantum voting protocol, which allows a classical voter to interact with a quantum center over a classical channel to complete the voting process.
Publication status: Preprint.
Threshold Batched Identity-Based Encryption from Pairings in the Plain Model(2025/2103)
Authors
Junqing Gong, East China Normal University, Shanghai Qi Zhi Institute
Brent Waters, The University of Texas at Austin, NTT Research
Hoeteck Wee, NTT Research
David J. Wu, The University of Texas at Austin
Abstract
In a batched identity-based encryption (IBE) scheme, ciphertexts are associated with a batch label $\mathsf{tag}^*$ and an identity $\mathsf{id}^*$ while secret keys are associated with a batch label $\mathsf{tag}$ and a set of identities $S$. Decryption is possible whenever $\mathsf{tag} = \mathsf{tag}^*$ and $\mathsf{id}^* \in S$. The primary efficiency property in a batched IBE scheme is that the size of the decryption key for a set $S$ should be independent of the size of $S$. Batched IBE schemes provide an elegant cryptographic mechanism to support encrypted memory pools in blockchain applications.
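A minimal functional model of this syntax is sketched below (illustrative Python only; it captures the decryption condition above, not the pairing-based construction, and a real scheme must additionally keep the key size independent of $|S|$).

```python
# Functional model of batched IBE *syntax* only: a key for (tag, S) decrypts a ciphertext
# for (tag*, id*) iff tag == tag* and id* in S. No cryptography; names are placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Ciphertext:
    tag: str          # batch label tag*
    identity: str     # identity id*
    payload: str      # stands in for the encrypted message

@dataclass(frozen=True)
class SecretKey:
    tag: str               # batch label tag
    identities: frozenset  # set S; in a real scheme the key must not grow with |S|

def encrypt(tag: str, identity: str, message: str) -> Ciphertext:
    return Ciphertext(tag, identity, message)

def keygen(tag: str, identities) -> SecretKey:
    return SecretKey(tag, frozenset(identities))

def decrypt(sk: SecretKey, ct: Ciphertext):
    # Decryption condition from the abstract: tag = tag* and id* in S.
    if sk.tag == ct.tag and ct.identity in sk.identities:
        return ct.payload
    return None

ct = encrypt(tag="block-42", identity="alice", message="m")
sk = keygen(tag="block-42", identities={"alice", "bob"})
assert decrypt(sk, ct) == "m"
assert decrypt(keygen("block-43", {"alice"}), ct) is None
```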
In this work, we introduce a new algebraic framework for building pairing-based batched IBE. Our framework gives the following:
First, we obtain a selectively-secure batched IBE scheme under a $q$-type assumption in the plain model. Both the ciphertext and the secret key consist of a constant number of group elements. This is the first pairing-based batched IBE scheme in the plain model. Previous pairing-based schemes relied on the generic group model and the random oracle model.
Next, we show how to extend our base scheme to a threshold batched IBE scheme with silent setup. In this setting, users independently choose their own public and private keys, and there is a non-interactive procedure to derive the master public key (for a threshold batched IBE scheme) for a group of users from their individual public keys. We obtain a statically-secure threshold batched IBE scheme with silent setup from a $q$-type assumption in the plain model. As before, ciphertexts and secret keys in this scheme contain a constant number of group elements. Previous pairing-based constructions of threshold batched IBE with silent setup relied on the generic group model, could only support a polynomial number of identities (where the size of the public parameters scaled linearly with this bound), and ciphertexts contained $O(\lambda / \log \lambda)$ group elements, where $\lambda$ is the security parameter.
Finally, we show that if we work in the generic group model, then we obtain a (threshold) batched IBE scheme with shorter ciphertexts (by 1 group element) than all previous pairing-based constructions (and without impacting the size of the secret key).
Our constructions rely on classic algebraic techniques underlying pairing-based IBE and do not rely on the signature-based witness encryption viewpoint taken in previous works.
Publication status: Preprint.
A Graph-Theoretic Framework for Randomness Optimization in First-Order Masked Circuits(2025/2102)
Authors
Dilip Kumar S. V., COSIC, KU Leuven
Benedikt Gierlichs, COSIC, KU Leuven
Ingrid Verbauwhede, COSIC, KU Leuven
Abstract
We present a generic, automatable framework to reduce the demand for fresh randomness in first-order masked circuits while preserving security in the glitch-extended probing model. The method analyzes the flow of randomness through a circuit to establish security rules based on the glitch-extended probing model. These rules are then encoded as an interference graph, transforming the optimization challenge into a graph coloring problem, which is solved efficiently with a DSATUR heuristic. Crucially, the optimization only rewires randomness inputs without altering core logic, ensuring seamless integration into standard EDA flows and applicability to various gadgets like DOM-indep (Domain-Oriented Masking) and HPC (Hardware Private Circuits). On 32-bit adder architectures, the framework substantially reduces randomness requirements by 79–90%; for instance, the Kogge–Stone adder's requirement of 259 unique random inputs is reduced to 27. All optimized designs were evaluated using PROLEAD, with the leakage results indicating compliance with first-order glitch-extended probing security.
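As a rough sketch of the coloring step, the snippet below runs generic DSATUR on an abstract interference graph, under the assumption that vertices model randomness inputs and edges model sharing conflicts; it is not the paper's toolflow.

```python
# DSATUR greedy coloring of an interference graph (generic sketch, not the paper's tool).
# A color corresponds to one fresh random input that all vertices of that color may share.
def dsatur_coloring(adjacency):
    """adjacency: dict vertex -> set of conflicting vertices. Returns dict vertex -> color."""
    colors = {}
    uncolored = set(adjacency)
    while uncolored:
        # Saturation degree: number of distinct colors already used by a vertex's neighbors.
        def saturation(v):
            return len({colors[u] for u in adjacency[v] if u in colors})
        # Pick the most saturated uncolored vertex; break ties by plain degree.
        v = max(uncolored, key=lambda x: (saturation(x), len(adjacency[x])))
        neighbor_colors = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in neighbor_colors:   # smallest color not used by any neighbor
            c += 1
        colors[v] = c
        uncolored.remove(v)
    return colors

# Example: r0 conflicts with r1 and r2, while r1, r2, and r3 may reuse random sources.
g = {"r0": {"r1", "r2"}, "r1": {"r0"}, "r2": {"r0"}, "r3": set()}
coloring = dsatur_coloring(g)
print(coloring, "->", len(set(coloring.values())), "fresh random inputs instead of", len(g))
```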
Publication status: Published elsewhere. To appear in DATE 2026 (Design, Automation and Test in Europe).
Fault Attacks against UOV-based Signatures(2025/2101)
Authors
Sven Bauer, Siemens AG, Foundational Technologies
Fabrizio De Santis, Siemens AG, Foundational Technologies
Kristjane Koleci, Siemens AG, Foundational Technologies
Abstract
The Unbalanced Oil and Vinegar (UOV) construction is the foundation of several post-quantum digital signature algorithms currently under consideration in NIST's standardization process for additional post-quantum digital signature schemes. This paper introduces new single-fault-injection attacks against the signing procedure of deterministic variants of signature schemes based on the UOV construction. We show how these attacks can be applied to MAYO and PROV, two signature schemes submitted to the NIST call for additional post-quantum signature schemes. The attacks are demonstrated on reference implementations running on an ARM Cortex-M4 processor. Our attacks do not require precise triggering or precise fault injection capabilities: any type of fault in large portions of the code has the potential to result in successful key recovery. We demonstrate our attacks with very cheap equipment and simple clock glitching techniques, recovering the secret key with either two faulty signatures, or one correct and one faulty signature, in the case of MAYO, and with one correct and two faulty signatures in the case of PROV. The fact that our attacks do not require precise fault injection capabilities and succeed with only a few signatures makes them particularly powerful, and hence a serious threat to the implementation security of post-quantum digital signature schemes.
Publication status: Published elsewhere. Minor revision. FDTC 2025.
Tag Functions and Their Applications to Lattice-based Signatures and IBEs — Compact Designs and Tighter Security(2025/2100)
Authors
Parhat Abla, Shantou University
Abstract
The existing lattice-based signature and IBE schemes suffer from non-compact public keys or a large reduction loss in the security analysis. We address these deficiencies as follows:
– First, we construct a lattice-based short signature scheme with a compact verification key in the standard model based on the ring short integer solution (RSIS) assumption. Under the same compactness, the ring modulus of our signature scheme is significantly smaller than that of the compact signature scheme of Alperin-Sheriff (PKC 2015). More importantly, our signature scheme achieves a better reduction loss than all previous confined-guessing-based signatures; in other words, it achieves better security and efficiency simultaneously.
– Secondly, we design a short signature scheme with a nearly compact public key and an even smaller reduction loss. Our second signature scheme achieves an even better reduction loss than our first, at the cost of increasing the public key to a super-constant number of ring vectors.
– Last but not least, we construct an adaptively secure compact IBE scheme from lattice assumptions and the truncation collision-resistant hash functions (TCRHF) introduced by Jager and Kurek (ASIACRYPT 2018). Note that the previous TCRHF-based IBE schemes are not even close to compactness.
These improvements mainly stem from our compact design of the tag functions and their more compact homomorphic evaluations. We also believe that our newly designed tag functions may find further applications in the design of other cryptographic schemes, such as ABE.
Publication status: Preprint.
A Lattice-based Designated Verifier zkSNARK from Standard Assumptions(2025/2099)
Authors
Mohammad Sadegh Ahmadi, Sharif University of Technology
Taraneh Eghlidos, Sharif University of Technology
Behzad Abdolmaleki, University of Sheffield
Ngoc Khanh Nguyen, King's College London
Abstract
Designated Verifier zero-knowledge Succinct Non-Interactive Arguments of Knowledge (DV-zkSNARKs) are cryptographic argument systems in which the ability to verify proofs is restricted to a designated verifier. Unlike publicly verifiable zkSNARKs, these constructions ensure that only an authorized party can validate the correctness of a proof. Existing lattice-based DV-zkSNARK constructions typically rely on either linear-only encryption (LOE) or linear targeted malleability (LTM). The former does not guarantee security against quantum adversaries, while the latter restricts knowledge soundness to the non-adaptive setting. To address these limitations, we propose an inner-product argument system that relies solely on the hardness of the Module Short Integer Solution (MSIS) assumption and achieves knowledge soundness in the random oracle model. This construction enables a designated verifier, holding a secret key, to succinctly verify the inner product of a committed witness with an arbitrary vector. By combining our argument system with a linear probabilistically checkable proof (LPCP) compiler, we obtain, to the best of our knowledge, the first DV-zkSNARK construction based on standard assumptions. Our implementation achieves prover and verification times comparable to the state of the art, while reducing the public parameter size by a factor of 10, at the cost of a 2.5× increase in proof size.
Publication status: Preprint.
Optical computing of zero-knowledge proof with single-pixel imaging(2025/2098)
Authors
Wei Huang, Great Bay University
Shuming Jiao, Great Bay University
Huichang Guan, BYD Auto Industry Co. Ltd.
Huisi Miao, Xiangtan University
Chao Wang, Great Bay University
Abstract
Optical computing has garnered significant attention in recent years due to its high-speed parallel processing and low power consumption. It has the potential to replace traditional electronic components and systems for various computation tasks. Among these applications, leveraging optical techniques to address information security problems has emerged as a critical research topic. However, current attempts focus predominantly on areas such as image encryption and information hiding, with limited exploration of other modern information security concepts, including zero-knowledge proof (ZKP). In this paper, we propose an optical ZKP method based on single-pixel imaging (SPI). By exploiting the flexibility of SPI, our approach directly acquires randomly permuted results of the source problem's solution in the form of encoded images, thereby encrypting and verifying the original solution. A ZKP for the source problem can then be realized with optical computing via a proving protocol that discloses no additional information. Simulated and experimental results show that the proposed method can be effectively applied to two typical ZKP problems: the Sudoku and Hamiltonian cycle problems.
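For readers unfamiliar with permutation-based ZKPs, the classical (non-optical) protocol that this "permute, encode, and selectively reveal" idea echoes is Blum's protocol for Hamiltonian cycles; a minimal hash-commitment simulation is sketched below. It is purely illustrative and unrelated to the optical SPI encoding used in the paper.

```python
# Blum's permute-and-commit ZKP for Hamiltonian cycles, simulated locally with hash commitments.
import hashlib, os, random

def commit(bit):
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + bytes([bit])).hexdigest(), nonce

def opens_to(com, bit, nonce):
    return com == hashlib.sha256(nonce + bytes([bit])).hexdigest()

def zk_round(adj, cycle):
    """One round, both roles simulated locally. adj: n x n 0/1 adjacency matrix, cycle: list of
    vertices forming a Hamiltonian cycle in adj. In the real protocol only the commitments are
    sent before the challenge, and the prover reveals exactly what the challenge asks for."""
    n = len(adj)
    # Prover: relabel vertices with a secret random permutation, commit to the permuted graph.
    perm = list(range(n)); random.shuffle(perm)            # new_label = perm[old_label]
    padj = [[adj[perm.index(i)][perm.index(j)] for j in range(n)] for i in range(n)]
    coms = [[commit(padj[i][j]) for j in range(n)] for i in range(n)]
    challenge = random.randint(0, 1)                       # verifier's random challenge
    if challenge == 0:
        # Reveal the permutation and open everything: check the committed graph is a
        # relabeling of the public graph (this reveals nothing about the cycle).
        return all(opens_to(coms[i][j][0], padj[i][j], coms[i][j][1]) and
                   padj[i][j] == adj[perm.index(i)][perm.index(j)]
                   for i in range(n) for j in range(n))
    # Open only the edges of the permuted cycle: check they open to 1 and form a closed
    # tour through all n relabeled vertices (this reveals nothing about the relabeling).
    pcycle = [perm[v] for v in cycle]
    edges = list(zip(pcycle, pcycle[1:] + pcycle[:1]))
    return (len(set(pcycle)) == n and
            all(opens_to(coms[i][j][0], 1, coms[i][j][1]) for i, j in edges))

# Triangle graph with the Hamiltonian cycle 0-1-2-0.
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
assert all(zk_round(adj, [0, 1, 2]) for _ in range(10))
```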
Publication status: Preprint.
Hash-Based Blind Signatures: First Steps(2025/2097)
Authors
Javier Herranz, Dept. Matemàtiques, Universitat Politècnica de Catalunya
Hugo Louiso, Université de Bordeaux
Abstract
Hash-based signatures are a strong candidate for post-quantum scenarios requiring authentication and integrity. Their security relies only on (well-studied) properties of hash functions, so they may be thought of as more robust than other schemes that (today) resist quantum attacks, such as those based on lattices, codes, or isogenies.
Recent works have also studied hash-based signature schemes with additional properties, such as group, ring, threshold, or aggregate signatures. In this work we do the same for the important case of blind signatures. We describe a possible hash-based instantiation of Fischlin's generic scheme, motivate our choices, and give benchmarks for running time and memory requirements obtained from our C implementation.
Publication status: Preprint.
Laser Fault Injection Attack on the eXtended Merkle Signature Scheme(2025/2096)
Authors
Alexander Wagner, Fraunhofer Institute for Applied and Integrated Security
Marc Schink, Fraunhofer Institute for Applied and Integrated Security
Silvan Streit, Fraunhofer Institute for Applied and Integrated Security
Dominik Klein, Federal Office for Information Security
Sven Freud, Federal Office for Information Security
Abstract
Interest in hash-based signatures (HBS) has increased with the need for post-quantum cryptography (PQC) that can withstand attacks by quantum computers. Since their standardization, stateful HBS algorithms have been deployed in products ranging from embedded devices to servers.
In practice, they are most applicable for verifying the integrity and authenticity of data that rarely changes, such as the firmware of embedded devices. The verification procedure then takes place during a secure boot or firmware update process. In past works, the research community has investigated hardware and software optimizations for this use case, and vendors have brought forward products.
In this study, we practically evaluate a fault attack on the Winternitz One-Time Signature (WOTS) scheme. The attack can be mounted on different HBS schemes, such as LMS, XMSS, and SPHINCS+. Both the verification and the signing operation can be targeted.
The study describes the preparation and implementation of the attack on a standard microcontroller, as well as the difficulties the attacker has to overcome. Additionally, it presents a countermeasure that is easy to implement and significantly increases the effort required of an attacker.
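For intuition on why such faults are dangerous for Winternitz-style schemes, the toy single-chain sketch below (a generic WOTS hash chain, not LMS/XMSS/SPHINCS+ and not the paper's specific attack) shows how a leaked or mis-iterated intermediate chain value immediately enables forgeries for larger message digits.

```python
# Toy single-chain Winternitz (WOTS) sign/verify, illustrating why iteration-count faults matter.
import hashlib, os

W = 16                          # Winternitz parameter: the message "digit" lies in [0, W-1]

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = H(x)
    return x

sk = os.urandom(32)
pk = chain(sk, W - 1)           # public key = end of the hash chain

def sign(digit: int) -> bytes:
    return chain(sk, digit)     # signature = intermediate chain value

def verify(digit: int, sig: bytes) -> bool:
    return chain(sig, W - 1 - digit) == pk

sig = sign(5)
assert verify(5, sig)
# If a fault makes the signer release a chain value for a *smaller* digit than intended, or makes
# the verifier iterate too few times, the chain can be hashed forward to forge larger digits:
assert verify(7, chain(sig, 2))
# Real WOTS adds checksum chains precisely to block such digit-increasing forgeries in the
# fault-free setting; fault attacks aim to circumvent these protections.
```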
Publication status: Published elsewhere. https://www.bsi.bund.de/EN/Service-Navi/Publikationen/Studien/LFI_Attack_XMSS/LFI_Attack_XMSS.html?nn=1022786
FPS: Flexible Payment System(2025/2095)
Authors
Adithya Bhat, Visa (United States)
Srinivasan Raghuraman, Visa (United States), Massachusetts Institute of Technology
Panagiotis Chatzigiannis, Visa (United States)
Duc V Le, Visa (United States)
Mohsen Minaei, Visa (United States)
Abstract
Existing payment systems make fixed trade-offs between performance and security assumptions. Traditional centralized systems like Visa assume synchronous networks and crash faults to achieve high throughput, while blockchain-based systems (e.g., Algorand, Aptos) adopt Byzantine fault tolerance and partial synchrony for stronger security at the cost of performance. This rigid approach forces all users to accept the same security-performance trade-off regardless of their individual trust and threat models.
We present a flexible payment system where clients independently choose assumptions about (i) network timing (bounded or partial synchrony), (ii) corruption (static or adaptive), and (iii) faults (crash or Byzantine), supporting eight assumption combinations simultaneously. Unlike traditional systems requiring consensus, our approach uses a novel flexible variant of consistent broadcast where clients external to the protocol verify delivery through cryptographic proofs, eliminating the need for global ordering. We implemented our system in Rust and demonstrated that clients choosing partially synchronous network and crash assumptions achieve $+242.1\%$ higher throughput and $+70.4\%$ better latency compared to clients with synchronous network and Byzantine assumptions, confirming that our system enables users to optimize their individual security-performance trade-offs.
Publication status: Preprint.
Vega: Low-Latency Zero-Knowledge Proofs over Existing Credentials(2025/2094)
Authors
Darya Kaviani, University of California, Berkeley
Srinath Setty, Microsoft Research
Abstract
As digital identity verification becomes increasingly pervasive, existing privacy-preserving approaches are still limited by complex circuit designs, large proof sizes, trusted setups, or high latency. We present Vega, a practical zero-knowledge proof system that proves statements about existing credentials without revealing anything else. Vega is simple, does not require a trusted setup, and is more efficient than the prior state of the art: for a 1920-byte credential, Vega achieves 212 ms proving time, 51 ms verification time, 150 kB proofs, and a 436 kB proving key. At the heart of Vega are two principles that together enable a lightweight proof system that pays only for what it needs. First, fold-and-reuse proving exploits repetition and folding opportunities (i) across presentations, by pushing repeated work to a rerandomizable precomputation; (ii) across uniform hashing steps, by folding many steps into a single step; and (iii) for zero knowledge, by folding the public-coin transcript with a random one. Second, lookup-centric arithmetization extracts the relevant values from credential bytes, both to obtain the needed fields without full in-circuit parsing and to enable length-hiding hashing.
Publication status: Preprint.
Lore: An LWE-based Key Encapsulation Mechanism with Variable Modulus and CRT Compression(2025/2093)
Authors
Zhongxiang Zheng, School of Computer and Cyber Sciences, Communication University of China, Beijing, China
Anyu Wang, Institute for Advanced Study, BNRist, Tsinghua University, Beijing, China
Chunhuan Zhao, Huawei Technologies, Beijing, China
Guangwu Xu, School of Cyber Science and Technology, Shandong University, QingDao, China
Zhengtao Jiang, School of Computer and Cyber Sciences, Communication University of China, Beijing, China
Sibo Feng, School of Computer and Cyber Sciences, Communication University of China, Beijing, China
Zhichen Yan, School of Computer and Cyber Sciences, Communication University of China, Beijing, China
Shuang Sun, Blockchain Laboratory, ChinaBond Finance and Information Technology Co., LTD, Beijing, China
Xiaoyun Wang, Institute for Advanced Study, BNRist, Tsinghua University, Beijing, China
Abstract
In this paper, we propose a new post-quantum lattice-based IND-CCA2-secure key encapsulation mechanism (KEM) named Lore. The scheme is based on a variant of the MLWR problem following the LPR structure, together with two new techniques, variable modulus and CRT compression, which balance the decryption failure probability against the ciphertext size. We prove its security in the ROM/QROM and provide concrete parameters as well as a reference implementation, showing that our scheme enjoys high efficiency, compact bandwidth, and an appropriate decryption failure rate (DFR) for its security levels compared with prior results.
Publication status: Preprint.
CRA and Cryptography: The Story Thus Far(2025/2092)
Authors
Markku-Juhani O. Saarinen, Tampere University, Finland
Abstract
We report on our experiences with the ongoing European standardisation efforts related to the EU Cyber Resilience Act (CRA) and provide interim (November 2025) estimates on the direction that European cryptography regulation may take, particularly concerning the algorithm ``allow list'' and PQC transition requirements in products.
We also outline some of the risks associated with the partially closed standardisation process, including active impact minimisation by vendors concerned with engineering costs, a lack of public review leading to lower technical quality, and an increased potential for backdoors.
The Cyber Resilience Act came into effect in December 2024, and its obligations will fully take effect for makers of ``products with digital elements'' from 2027. CRA compliance is a requirement for obtaining the CE mark and a prerequisite for selling products in the European Single Market, which comprises approximately 450 million consumers. The CRA has a wide-ranging set of security requirements, including security patching and the use of cryptography (data integrity, confidentiality for data at rest and data in transit). However, the Cyber Resilience Act itself is a legal text devoid of technical detail -- it does not specify the type of cryptography deemed appropriate to satisfy its requirements.
The technical implications of CRA are being detailed in approximately 40 new standards from the three European standardisation organisations, CEN, CENELEC, and ETSI. While the resulting ETSI standards can be expected to be available for free even in the drafting stage, the CEN and CENELEC standards will probably require a per-reader license fee. This, despite recent legal rulings asserting that product security and safety standards are part of EU law due to their legal effects.
Taking a recent (2024) example of cryptographic requirements in such standards, we observe that the definitions and language in the Radio Equipment Directive (RED DA) harmonised standard (EN 18031 series) may allow vendors to take an approach where weak cryptography is considered ``best practice'' right until exploitation is feasible.
Recognising recent developments such as the EU Post-Quantum Cryptography transition roadmap, many CRA standardisation working groups are moving towards a ``State-of-the-Art Cryptography'' (SOTA Cryptography) model where approved mechanism listings are published by the European Cybersecurity Certification Group (ECCG). CRA-compliant products may still support other cryptographic mechanisms, but only SOTA is permitted as a safe default for Internet-connected products.
Publication status: Published elsewhere. Minor revision. SSR 2025 -- Security Standardisation Research Conference. Passau, Germany, 04-05 December 2025. https://www.uni-passau.de/ssr2025
Efficient and Proof-of-Useful-Work Friendly Local-Search for Distributed Consensus(2025/2091)
Authors
Matthias Fitzi, IOG, Switzerland
Aggelos Kiayias, University of Edinburgh, IOG, UK
Laurent Michel, University of Connecticut
Giorgos Panagiotakos, IOG, Greece
Alexander Russell, University of Connecticut, IOG, USA
Abstract
Blockchain protocols based on the popular ``Proof-of-Work'' mechanism yield public transaction ledgers maintained by a group of distributed participants who solve computationally hard puzzles to earn the right to add a block. The success and widespread adoption of this mechanism has led to staggering energy consumption devoted to solving such (otherwise) ``useless'' puzzles. While the environmental impact of the framework has been widely criticized, it has remained the dominant distributed ledger paradigm for years.
The Ofelimos ``Proof-of-Useful-Work'' protocol (Fitzi et al., CRYPTO 2022) addressed this by establishing that useful combinatorial problems can replace the conventional hashing puzzles, yielding a provably secure blockchain that meaningfully utilizes the computational work underlying the protocol. The usefulness-to-wastefulness ratio of Ofelimos hinges on the properties of its underlying generic distributed local-search algorithm, Doubly Parallel Local Search (DPLS). We observe that this search procedure is particularly wasteful when exploring steep regions of the solution space.
To address this issue, we introduce Frequently Rerandomized Local Search (FRLS), a new generic distributed local-search algorithm that we show to be consistent with the Ofelimos architecture. While this algorithm retains ledger security, we show that it also provides compelling performance on benchmark problems arising in practice: concretely, state-of-the-art local-search algorithms for cumulative scheduling and warehouse location can be directly adapted to FRLS, and we experimentally demonstrate the efficiency of the resulting algorithms.
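As a caricature of the "frequently rerandomized" idea, the sketch below shows a generic, single-machine local-search loop with periodic rerandomized restarts under assumed toy parameters; the paper's FRLS is a distributed algorithm integrated with the Ofelimos proof-of-useful-work architecture and is not reproduced here.

```python
# Generic local search with frequent rerandomization (illustrative only, not the FRLS algorithm).
import random

def rerandomized_local_search(cost, n_bits, steps_per_restart, restarts, rng=random):
    """Minimize cost() over n-bit strings by single-bit-flip descent, restarting from a fresh
    random point every `steps_per_restart` steps instead of grinding through steep regions."""
    best_x, best_c = None, float("inf")
    for _ in range(restarts):
        x = [rng.randint(0, 1) for _ in range(n_bits)]   # rerandomized starting point
        c = cost(x)
        for _ in range(steps_per_restart):
            i = rng.randrange(n_bits)                    # propose a single-bit flip
            x[i] ^= 1
            c_new = cost(x)
            if c_new <= c:
                c = c_new                                # keep improving or sideways moves
            else:
                x[i] ^= 1                                # revert worsening moves
        if c < best_c:
            best_x, best_c = x[:], c
    return best_x, best_c

# Example: minimize the number of ones (a trivial cost function for illustration).
x, c = rerandomized_local_search(sum, n_bits=32, steps_per_restart=200, restarts=10)
print(c)
```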
Publication status: Preprint.
Quantum Grover Attack on MIBS(2025/2090)
Authors
Hasan Ozgur Cildiroglu, Ankara University, Middle East Technical University
Harun Basmaci, Ankara University
Oguz Yayla, Middle East Technical University
Abstract
The advent of quantum computing necessitates a rigorous reassessment of classical cryptographic primitives, particularly lightweight block ciphers (LBCs) deployed in resource-constrained environments. This work presents a comprehensive quantum implementation and security analysis of the Feistel-based LBC MIBS against quantum cryptanalysis. Using the inherent reversibility of its structure, we develop a novel ancilla-free quantum circuit that optimizes qubit count and depth. For MIBS-64 and MIBS-80, our implementation achieves quantum costs of 23,371 and 24,363, requiring 128 and 144 qubits, respectively, with a depth of 4,768. We subsequently quantify the cipher's vulnerability to Grover’s key-search algorithm under the NIST PQC security constraint $\texttt{MAXDEPTH}$. By constructing Grover oracles using inner parallelization with multiple plaintext-ciphertext pairs to suppress false positives, we demonstrate total quantum attack costs of approximately $2^{94}$ for MIBS-64 and $2^{111}$ for MIBS-80. These values fall below NIST’s Level-1 security threshold ($2^{170}$), confirming the susceptibility of both MIBS variants to quantum key-recovery attacks despite their classical lightweight efficiency.
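For orientation, such estimates generally follow the textbook Grover accounting (a generic sketch; the paper's exact gate decompositions, $\texttt{MAXDEPTH}$ handling, and parallelization strategy are not reproduced here): for a $k$-bit key and $r$ plaintext-ciphertext pairs in the oracle, one runs about $\lfloor (\pi/4)\sqrt{2^{k}} \rfloor$ Grover iterations, each costing roughly $2r \cdot C_{\mathrm{enc}} + C_{\mathrm{diff}}$ gates, where $C_{\mathrm{enc}}$ is the cost of one reversible encryption circuit (computed and then uncomputed, hence the factor $2r$) and $C_{\mathrm{diff}}$ is the cost of the diffusion operator.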
Publication status: Preprint.
Traceable Bottom-Up Secret Sharing and Law & Order on Community Social Key Recovery (Full Version)(2025/2089)
Authors
Rittwik Hajra, Indian Statistical Institute, Kolkata, India
Subha Kar, Indian Statistical Institute, Kolkata, India
Pratyay Mukherjee, Indian Statistical Institute, Kolkata, India, Hashgraph
Soumit Pal, Indian Statistical Institute, Kolkata, India
Abstract
A recent work by Kate et al. [EPRINT 2025] proposes a community-based social key recovery (SKR) scheme, where key-owners can use a subset of other community members as guardians and, in exchange, act as guardians to support other participants' key recovery. Their construction relies on a new concept called bottom-up secret sharing (BUSS). However, they do not consider a crucial feature, called traceability, which ensures that if more than a threshold number of the guardians collude, at least some colluders' identities can be traced -- thereby deterring participants from colluding. In this paper, we incorporate traceability into community social key recovery as an important feature.
We first introduce the notion of traceable BUSS, which allows tracing colluders by accessing a reconstruction box. Then, extending the work of Boneh et al. [CRYPTO 2024], we propose the first traceable BUSS construction. Finally, we show how to generically use a traceable BUSS scheme to construct a traceable SKR in the aforementioned community setting. Overall, this is the first scheme combining decentralized key management with traceability, marrying BUSS’s scalability with the deterrence of traceable secret sharing.
Publication status: Published elsewhere. Minor revision. Indocrypt 2025.
UP TO 50% OFF: Efficient Implementation of Polynomial Masking(2025/2088)
Authors
Jorge Andresen, University of Luebeck
Paula Arnold, University of Luebeck
Sebastian Berndt, Technische Hochschule Lübeck, University of Luebeck
Thomas Eisenbarth, University of Luebeck
Sebastian Faust, Technical University of Darmstadt
Marc Gourjon, Max Planck Institute for Security and Privacy
Eric Landthaler, University of Luebeck
Elena Micheli, Technical University of Darmstadt
Maximilian Orlt, UCLouvain, Technical University of Darmstadt
Pajam Pauls, University of Luebeck
Kathrin Wirschem, Technical University of Darmstadt
Liang Zhao, Technical University of Darmstadt
Abstract
While passive probing attacks and active fault attacks have been studied for multiple decades, research has only started to consider combined attacks that use both probes and faults relatively recently. During this period, polynomial masking became a promising, provably secure countermeasure to protect cryptographic computations against such combined attacks. Unlike other countermeasures, such as duplicated additive masking, polynomial masking can be implemented using a linear number of shares, as shown by Berndt et al. at CRYPTO '23. Based upon this fact, Arnold et al. noted at CHES '24 that polynomial masking is particularly well-suited for parallel computation. This characteristic is especially effective in scenarios involving multiple circuits with identical structures, such as the 16 SBoxes in AES. Just recently, Faust et al. showed at CHES '25 that one can also incorporate the technique of packed secret sharing into these masking schemes, given that the state-of-the-art polynomial masking scheme is secure against combined attacks.
In this work, we present provably secure advancements over this state-of-the-art scheme in both computational and randomness efficiency, reducing the randomness complexity by up to 50% and the computational complexity even further, from a quadratic term to a linear one for many parameters. Moreover, we present the first implementation of a polynomial masking scheme against combined attacks, along with an extensive experimental evaluation for a wide range of parameters and configurations, as well as statistical leakage detection to evaluate the security of the implementation on an Arm Cortex-M processor. Our implementation is publicly available to encourage further research in practical combined resilience.
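To illustrate the share structure that polynomial masking builds on, here is a toy Shamir-style sketch over a small prime field; the paper's gadgets, parallel packing, and combined-attack countermeasures are not modeled, and hardware schemes typically work over GF(2^8) rather than the illustrative prime field used here.

```python
# Toy Shamir-style polynomial masking over a small prime field (illustration of the share
# structure only; not the paper's scheme or its security argument).
import random

P = 257  # small prime field for illustration only

def share(secret, degree, n):
    """Mask `secret` as n shares (alpha_i, f(alpha_i)) of a random degree-`degree` polynomial
    with constant term `secret`; any `degree` or fewer shares reveal nothing about the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(degree)]
    alphas = list(range(1, n + 1))                      # distinct nonzero evaluation points
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(a, f(a)) for a in alphas]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the masked value from degree+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123, degree=2, n=5)   # 5 shares of a degree-2 polynomial
assert reconstruct(shares[:3]) == 123       # any 3 shares suffice to reconstruct
```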
Publication status: Published by the IACR in TCHES 2026.