- Privacy Collapse: Benign Fine-Tuning Can Break Contextual Privacy in Language Models — arXiv:2601.15220, published 2 days ago
- Is Multilingual LLM Watermarking Truly Multilingual? A Simple Back-Translation Solution — arXiv:2510.18019, published Oct 20, 2025
- Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers — arXiv:2506.15674, published Jun 18, 2025
- Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models — arXiv:2411.00154, published Oct 31, 2024
- TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification — arXiv:2402.12991, published Feb 20, 2024
- Calibrating Large Language Models Using Their Generations Only — arXiv:2403.05973, published Mar 9, 2024
- ProPILE: Probing Privacy Leakage in Large Language Models — arXiv:2307.01881, published Jul 4, 2023