We've analyzed how popular watermarking methods (KGW, Gumbel) affect language model alignment, revealing critical tradeoffs in truthfulness, safety, and helpfulness. We propose "Alignment Resampling," a simple method that mitigates these alignment degradations, supported by theoretical insights and empirical results.
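The paper's exact procedure isn't reproduced here, but one plausible reading of "Alignment Resampling" is best-of-n sampling: draw several watermarked generations and keep the one an alignment reward model scores highest. The sketch below is a toy illustration under that assumption; `generate_watermarked` and `reward` are hypothetical stand-ins, not the paper's implementation.

```python
def generate_watermarked(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for a watermarked decoder (e.g., KGW or Gumbel);
    # a real implementation would bias token sampling with a keyed hash.
    return f"{prompt} [watermarked completion #{seed}]"

def reward(text: str) -> float:
    # Hypothetical alignment reward model; here a toy proxy favoring brevity.
    return -float(len(text))

def alignment_resample(prompt: str, n: int = 4) -> str:
    """Best-of-n: draw n watermarked generations, return the highest-reward one."""
    candidates = [generate_watermarked(prompt, s) for s in range(n)]
    return max(candidates, key=reward)
```

The appeal of a best-of-n scheme is that it recovers alignment quality without modifying the watermarking scheme itself, at the cost of n-fold generation.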
Excited to share our new paper: "Operationalizing a Threat Model for Red-Teaming Large Language Models"! In it, we present a detailed framework for improving security and robustness of #LLM-based #AI systems.
Abstract: Creating secure and resilient applications with large language models (LLMs) requires anticipating, adapting to, and countering unforeseen threats. Red-teaming has emerged as a critical technique for identifying vulnerabilities in real-world LLM implementations. This paper presents a detailed threat model and provides a systematization of knowledge (SoK) of red-teaming attacks on LLMs. We develop a taxonomy of attacks based on the stages of the LLM development and deployment process and extract various insights from previous research. In addition, we compile methods for defense and practical red-teaming strategies for practitioners. By delineating prominent attack motifs and shedding light on various entry points, this paper provides a framework for improving the security and robustness of LLM-based systems.
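To give a concrete feel for a stage-based taxonomy like the one the abstract describes, here is a minimal sketch. The stage names, attack names, and entry points are illustrative assumptions, not the paper's actual taxonomy.

```python
from dataclasses import dataclass

@dataclass
class Attack:
    name: str
    stage: str        # lifecycle stage at which the attack applies
    entry_point: str  # where the adversary gains influence

# Hypothetical entries for illustration only.
TAXONOMY = [
    Attack("training-data poisoning", "pre-training", "corpus"),
    Attack("backdoor fine-tuning", "fine-tuning", "instruction data"),
    Attack("prompt injection", "deployment", "user input"),
    Attack("jailbreak prompting", "deployment", "user input"),
]

def attacks_at(stage: str) -> list[str]:
    """List attack names applicable at a given lifecycle stage."""
    return [a.name for a in TAXONOMY if a.stage == stage]
```

Indexing attacks by lifecycle stage makes it easy to ask questions like "what should a deployment-time defense cover?" without re-reading the whole catalog.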
Paper: https://huggingface.co/papers/2506.04462
Feedback appreciated!