The Language Itself Shapes Privacy Leakage in LLMs
Large language models (LLMs) are increasingly deployed in multilingual and sensitive settings, from healthcare and legal assistance to customer support and education. While much of the privacy literature focuses on model size, training data, or attack design, a fundamental question has remained largely unexplored: does the language itself influence privacy leakage in LLMs?