December 01

Don’t shout “Bingo!” Understanding (and Addressing) the Shortcomings of Enterprise Threat Detection Products
12:00 PM – 1:00 PM (America/New_York)

Abstract: Update -- we are still awful at preventing data breaches and other cybersecurity incidents. Why do these sophisticated (and costly) commercial threat detection products continue to fail? In this talk, I'll describe our efforts to better understand, and even address, these failure points. First, I'll provide evidence that the extraordinarily high false alarm rates observed in Endpoint Detection & Response (EDR) products can be eliminated by examining the history of alert-triggering processes. Second, I'll explain how the metrics used to evaluate threat detection products often paint a deeply misleading picture of organizations' security readiness. I will conclude by discussing how our ongoing work seeks to resolve these industry shortcomings by providing more principled foundations for threat detection and assessment.

Bio: Adam Bates is an Associate Professor at the University of Illinois at Urbana-Champaign, where he studies a broad range of topics in computer security. He is best known for his work on data provenance, the practice of examining suspicious activities on computing systems based on their historical context. Fittingly, Adam also appreciates the historical context of computer security research, regularly forcing students in his courses to read James Anderson's 1972 Computer Security Technology and Planning Study… both volumes. Adam is the recipient of two distinguished paper awards (S&P'23, ESORICS'22) and was the runner-up for the ACM SIGSAC Dissertation Award. His research has been recognized and supported by an NSF SaTC FRONTIER award, an NSF CISE Research Initiation Initiative (CRII) award, and an NSF CAREER award, as well as a gift from the VMware University Research Fund.
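As a rough illustration of the first point above -- using the history of an alert-triggering process to rule out false alarms -- here is a toy provenance-based triage sketch. This is not the speaker's actual system; the process table, PIDs, and allow-list of benign ancestries are all hypothetical:

```python
# Toy sketch: triage an EDR alert by walking the provenance (process ancestry)
# of the process that raised it. All data below is hypothetical.

# Hypothetical process table: pid -> (parent pid, executable name)
PROC = {
    1:   (0, "init"),
    40:  (1, "sshd"),
    112: (40, "bash"),
    113: (112, "powershell"),   # alert-triggering process, interactive origin
    200: (1, "update-agent"),
    201: (200, "powershell"),   # same binary, but benign provenance
}

def ancestry(pid):
    """Return the chain of executable names from the process up to the root."""
    chain = []
    while pid in PROC:
        ppid, exe = PROC[pid]
        chain.append(exe)
        pid = ppid
    return chain

# Hypothetical allow-list of ancestry patterns observed during benign operation.
BENIGN_ANCESTRIES = {("powershell", "update-agent", "init")}

def triage(pid):
    """Suppress the alert if the triggering process has a known-benign history."""
    return "suppress" if tuple(ancestry(pid)) in BENIGN_ANCESTRIES else "escalate"
```

Here `triage(201)` suppresses the alert because the PowerShell instance was spawned by the trusted update agent, while `triage(113)` escalates the identical binary launched from an interactive shell -- the history, not the process itself, decides.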

October 30

Myco: Unlocking Polylogarithmic Accesses in Metadata-Private Messaging
3:00 PM – 4:30 PM (America/New_York)

Abstract: As billions of people rely on end-to-end encrypted messaging, the exposure of metadata, such as communication timing and participant relationships, continues to deanonymize users. Asynchronous metadata-hiding solutions with strong cryptographic guarantees have historically been bottlenecked by quadratic O(N²) server computation in the number of users N due to reliance on private information retrieval (PIR). We present Myco, a metadata-private messaging system that preserves strong cryptographic guarantees while achieving O(N log² N) efficiency. To achieve this, we depart from PIR and instead introduce an oblivious data structure through which senders and receivers privately communicate. To unlink reads and writes, we instantiate Myco in an asymmetric two-server distributed-trust model where clients write messages to one server, which is tasked with obliviously transmitting these messages to another server, from which clients read. Myco achieves throughput improvements of up to 302x over multi-server and 2,219x over single-server state-of-the-art systems based on PIR.

Bio: Darya Kaviani is a second-year PhD student at UC Berkeley, advised by Raluca Ada Popa. Her research interests are in applied cryptography and systems security.

Zoom info: Meeting ID: 945 5603 5878, Password: 865039
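A back-of-envelope sketch of the asymptotic gap the abstract describes, ignoring constants (the measured 302x and 2,219x throughput numbers above are what matter in practice):

```python
import math

def server_work_ratio(n):
    """Ratio of O(N^2) PIR-style server work to O(N log^2 N) work at N = n users.
    Constants are ignored; this only illustrates how the gap scales."""
    return (n * n) / (n * math.log2(n) ** 2)

# At N = 2**20 (about a million users), log2(N) = 20, so the ratio is
# N / log2(N)**2 = 1048576 / 400, already several thousand-fold.
```

The point of the sketch is simply that the quadratic term dominates quickly: doubling the user base doubles the gap, up to the slowly growing log² factor.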

October 20

LLMs unlock new paths to monetizing exploits
12:00 PM – 1:00 PM (America/New_York)

Abstract: We argue that large language models (LLMs) will soon alter the economics of cyberattacks. Instead of attacking the most commonly used software and monetizing exploits by targeting the lowest common denominator among victims, LLMs enable adversaries to launch tailored attacks on a user-by-user basis. On the exploitation front, instead of human attackers manually searching for one difficult-to-identify bug in a product with millions of users, LLMs can find thousands of easy-to-identify bugs in products with thousands of users. And on the monetization front, instead of generic ransomware that always performs the same attack (encrypt all your data and request payment to decrypt), an LLM-driven ransomware attack could tailor the ransom demand based on the particular content of each exploited device. We show that these two attacks (and several others) are imminently practical using state-of-the-art LLMs. For example, we show that, without any human intervention, an LLM finds highly sensitive personal information in the Enron email dataset (e.g., an executive having an affair with another employee) that could be used for blackmail. While some of our attacks are still too expensive to scale widely today, the incentives to implement them will only increase as LLMs get cheaper. Thus, we argue that LLMs create a need for new defense-in-depth approaches.

Bio: Edoardo Debenedetti is a fourth-year PhD student in Computer Science at ETH Zurich, advised by Prof. Florian Tramèr. His research focuses on real-world machine learning security and privacy. Most recently, he's been looking into the security of AI agents, working on evaluation frameworks and defenses. He is currently a Research Scientist Intern at Meta and recently worked as a Student Researcher at Google.

Zoom info: Meeting ID: 945 5603 5878, Password: 865039

October 06

InsPIRe: Communication-efficient PIR with Silent Preprocessing
12:00 PM – 1:00 PM (America/New_York)

Abstract: We present InsPIRe, the first private information retrieval (PIR) construction to simultaneously obtain high throughput and low query communication while using silent preprocessing (meaning no offline communication). Prior PIR schemes with both high throughput and low query communication required substantial offline communication: either downloading a database hint that is 10-100x larger than the communication cost of a single query (as in SimplePIR and DoublePIR [Henzinger et al., USENIX Security 2023]) or streaming the entire database (as in Piano [Zhou et al., S&P 2024]). In contrast, recent works such as YPIR [Menon and Wu, USENIX Security 2024] avoid offline communication at the cost of increasing the query size by 1.8-2x, up to 1-2 MB per query. InsPIRe obtains the best of both worlds: high throughput and low communication without any offline communication. Compared to YPIR, InsPIRe uses 5x smaller cryptographic keys and up to 50% less online query communication while achieving up to 25% higher throughput. We show that InsPIRe enables improvements across a wide range of applications and database shapes, including the InterPlanetary File System and private device enrollment.

At the core of InsPIRe, we develop a novel ring packing algorithm, InspiRING, for transforming LWE ciphertexts into RLWE ciphertexts. InspiRING is more amenable to the silent preprocessing setting, allowing the majority of the necessary operations to be moved to offline preprocessing. InspiRING requires only two key-switching matrices, whereas prior approaches needed logarithmically many. We also show that InspiRING has smaller noise growth and faster packing times than prior works in the setting where the total key-switching material must be small. To further reduce communication costs in the PIR protocol, InsPIRe performs the second level of PIR using homomorphic polynomial evaluation, which requires only one additional ciphertext from the client.

Bio: Rasoul is a fourth-year PhD candidate at the University of Waterloo, advised by Florian Kerschbaum, and a member of the CrySP lab. His research focuses on the design and implementation of secure and private computation protocols tailored for real-world applications. In particular, he works on applications of Homomorphic Encryption and Secure Multi-Party Computation, with a focus on Private Information Retrieval. He has also contributed to projects on Private Set Intersection and Differential Privacy. His work has been published in top-tier venues such as IEEE S&P, USENIX Security, ACM CCS, and NDSS. He has also interned at the Private Computing group at Google, where he worked on the next generation of PIR protocols.

Zoom info: Meeting ID: 945 5603 5878, Password: 865039
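For readers new to PIR, the classic two-server XOR scheme below conveys the functionality PIR provides -- retrieving a record without revealing which one -- though it is only a teaching toy, not InsPIRe's lattice-based construction:

```python
import secrets

DB = [7, 13, 42, 99]  # toy 4-record database; real PIR databases are far larger

def make_queries(index, n):
    """Client: build two record-subset masks that differ only at the target index.
    Each mask on its own is uniformly random, so neither server learns the index."""
    s1 = secrets.randbits(n)       # uniformly random subset of the n records
    s2 = s1 ^ (1 << index)         # same subset with the target bit flipped
    return s1, s2

def answer(db, mask):
    """Server: XOR together the records selected by the mask."""
    out = 0
    for i, rec in enumerate(db):
        if (mask >> i) & 1:
            out ^= rec
    return out

def reconstruct(a1, a2):
    """Client: all records selected by both masks cancel under XOR,
    leaving exactly the record at the target index."""
    return a1 ^ a2
```

The "cost" visible even in this toy is the theme of the abstract: each server touches (on average) half the database per query, which is why reducing server computation and query communication simultaneously is the hard part.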

September 22

Ideal Pseudorandom Error-Correcting Codes with Applications to Watermarking Generative AI
12:00 PM – 1:00 PM (America/New_York)

Abstract: Motivated by the growing need to identify AI-generated content, we ([CG24]) introduced a powerful new framework for generative AI watermarking. This framework leverages a new cryptographic primitive called a pseudorandom error-correcting code (PRC). A PRC is an error-correcting code with the property that any polynomial number of codewords are pseudorandom to any efficient adversary. We construct PRCs from standard cryptographic assumptions, and in this talk I will give an overview of our construction from subexponential LPN. Since the introduction of PRCs, there has been a flurry of exciting works strengthening their properties and implementing them in practice. I will highlight new work with my collaborators ([AAC+25]) in which we define and construct a notion of an ideal PRC, with stronger robustness and pseudorandomness motivated by applications. Our proof of security uses tools from the analysis of Boolean functions. This is based on works with Sam Gunn, Omar Alrabiah, Prabhanjan Ananth, and Yevgeniy Dodis: [CG24] https://eprint.iacr.org/2024/235.pdf, [AAC+25] https://eprint.iacr.org/2024/1840

Bio: Miranda Christ is a final-year Computer Science PhD student in the crypto lab and theory group at Columbia University, where she is co-advised by Tal Malkin and Mihalis Yannakakis. Her research focuses on practically motivated theoretical cryptography. Most recently she has been interested in using tools from cryptography to watermark AI-generated content.

Zoom info: Meeting ID: 945 5603 5878, Password: 865039
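To build rough intuition for the two properties a PRC combines -- pseudorandomness without the key and robustness to errors -- here is a toy keyed watermark sketch. It is emphatically not a PRC (it has no error-correcting structure and its guarantees are only heuristic); it just shows content that agrees with a PRF-derived bit sequence, detection that survives random bit flips, and a sequence that looks random without the key:

```python
import hmac, hashlib, random

def prf_bits(key, n):
    """Pseudorandom bit sequence derived from the secret key (HMAC-SHA256 as a PRF)."""
    bits = []
    ctr = 0
    while len(bits) < n:
        block = hmac.new(key, ctr.to_bytes(8, "big"), hashlib.sha256).digest()
        for byte in block:
            for j in range(8):
                bits.append((byte >> j) & 1)
        ctr += 1
    return bits[:n]

def embed(key, n, flip_rate, rng):
    """'Watermarked content': the PRF sequence with a fraction of bits flipped,
    standing in for the noise a real watermark must survive."""
    return [b ^ (rng.random() < flip_rate) for b in prf_bits(key, n)]

def detect(key, content):
    """Keyed detection: agreement with the PRF sequence well above 1/2 => watermarked.
    Unwatermarked content agrees on about half the bits."""
    ref = prf_bits(key, len(content))
    agree = sum(a == b for a, b in zip(content, ref)) / len(content)
    return agree > 0.75

rng = random.Random(0)
key = b"secret"
```

A real PRC strengthens both halves of this picture: codewords are provably pseudorandom under a cryptographic assumption (e.g., the subexponential LPN assumption mentioned in the abstract), and decoding succeeds against a much stronger error model than uniform random flips.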

September 08

Planting Statistically Undetectable Backdoors in Deep Neural Networks
12:00 PM – 1:00 PM (America/New_York)

Abstract: In this talk, I will show how to plant backdoors in a large class of deep neural networks. These backdoors are statistically undetectable in the white-box setting, meaning that the backdoored and honestly trained models are close in total variation distance, even given the full descriptions of the models (e.g., all of the weights). The backdoor provides access to (invariance-based) adversarial examples for every input. However, without the backdoor, no one can generate any such adversarial examples, assuming the worst-case hardness of shortest vector problems on lattices. This talk is based on upcoming work with Andrej Bogdanov and Alon Rosen.

Zoom info: Meeting ID: 945 5603 5878, Password: 865039
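The undetectability notion in the abstract is measured in total variation distance. A minimal sketch of TV distance over a finite support (the two model distributions below are made-up numbers, purely for illustration):

```python
def tv_distance(p, q):
    """Total variation distance between two distributions over a finite support:
    TV(P, Q) = (1/2) * sum_x |P(x) - Q(x)|.
    Distributions are given as dicts mapping outcomes to probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Hypothetical training-output distributions over two possible models.
# If the TV distance is tiny, no test (even with full white-box access to the
# sampled model) can distinguish the two cases with advantage better than TV.
honest     = {"model_a": 0.50, "model_b": 0.50}
backdoored = {"model_a": 0.51, "model_b": 0.49}
```

The operational meaning is the point: any distinguisher's advantage is bounded by the TV distance between the honest and backdoored training distributions, which is why closeness in TV distance formalizes "statistically undetectable".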