Research Paper | Researchia:202604.09012 [Computer Science > Cybersecurity]

On the Price of Privacy for Language Identification and Generation

Xiaoyu Li

Abstract

As large language models (LLMs) are increasingly trained on sensitive user data, understanding the fundamental cost of privacy in language learning becomes essential. We initiate the study of differentially private (DP) language identification and generation in the agnostic statistical setting, establishing algorithms and matching lower bounds that precisely quantify the cost of privacy. For both tasks, approximate (ε, δ)-DP with constant ε > 0 recovers the non-private error rates: exp(−r(n)) for identification (for any r(n) = o(n)) and exp(−Ω(n)) for generation. Under pure ε-DP, the exponents degrade by a multiplicative factor of min{1, ε}, which we show is tight up to constants. Notably, for generation under pure DP with mild assumptions, the upper bound exp(−min{1, ε} · Ω(n)) matches the lower bound up to constants, establishing an optimal rate. Our results show that the cost of privacy in language learning is surprisingly mild: absent entirely under approximate DP, and exactly a min{1, ε} factor in the exponent under pure DP.
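The rates claimed in the abstract can be set side by side in display math (a restatement only; n is the sample size and the symbols are as used above):

```latex
% Error rates under approximate (eps, delta)-DP with constant eps > 0
% (matching the non-private rates, per the abstract):
\[
\underbrace{\exp(-r(n))}_{\text{identification, any } r(n) = o(n)}
\qquad
\underbrace{\exp(-\Omega(n))}_{\text{generation}}
\]
% Under pure eps-DP, each exponent shrinks by a factor of min{1, eps},
% which the abstract states is tight up to constants:
\[
\exp\bigl(-\min\{1,\varepsilon\}\cdot r(n)\bigr)
\qquad
\exp\bigl(-\min\{1,\varepsilon\}\cdot \Omega(n)\bigr)
\]
```

Note that for ε ≥ 1 the factor min{1, ε} equals 1, so pure DP with a constant ε ≥ 1 also recovers the non-private exponents; the degradation only appears in the high-privacy regime ε < 1.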


Source: arXiv:2604.07238v1 - http://arxiv.org/abs/2604.07238v1
PDF: https://arxiv.org/pdf/2604.07238v1

Submission: 4/9/2026
Subjects: Cybersecurity; Computer Science
