On the Price of Privacy for Language Identification and Generation
Abstract
As large language models (LLMs) are increasingly trained on sensitive user data, understanding the fundamental cost of privacy in language learning becomes essential. We initiate the study of differentially private (DP) language identification and generation in the agnostic statistical setting, establishing algorithms and matching lower bounds that precisely quantify the cost of privacy. For both tasks, approximate $(\varepsilon,\delta)$-DP with constant $\varepsilon$ recovers the non-private error rates for identification and for generation. Under pure $\varepsilon$-DP, the exponents of these rates degrade by a multiplicative factor, which we show is tight up to constants. Notably, for generation under pure DP with mild assumptions, the upper bound matches the lower bound up to constants, establishing an optimal rate. Our results show that the cost of privacy in language learning is surprisingly mild: absent entirely under approximate DP, and exactly a multiplicative factor in the exponent under pure DP.
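As background for the two privacy notions the abstract contrasts, the following sketch (not from the paper) shows the standard noise-addition mechanisms that realize each guarantee for a scalar query: the Laplace mechanism for pure $\varepsilon$-DP and the Gaussian mechanism, with the classic $\sigma = \Delta\sqrt{2\ln(1.25/\delta)}/\varepsilon$ calibration, for approximate $(\varepsilon,\delta)$-DP. All function names here are illustrative, not the paper's algorithms.

```python
import math
import random


def laplace_scale(sensitivity: float, eps: float) -> float:
    """Noise scale b = sensitivity / eps for the pure eps-DP Laplace mechanism."""
    return sensitivity / eps


def gaussian_sigma(sensitivity: float, eps: float, delta: float) -> float:
    """Classic calibration sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps
    for the approximate (eps, delta)-DP Gaussian mechanism."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps


def laplace_mechanism(value: float, sensitivity: float, eps: float,
                      rng: random.Random = random) -> float:
    """Pure eps-DP release of a scalar query with the given L1 sensitivity.
    A Laplace sample is drawn as the difference of two iid exponentials."""
    b = laplace_scale(sensitivity, eps)
    noise = rng.expovariate(1.0 / b) - rng.expovariate(1.0 / b)
    return value + noise


def gaussian_mechanism(value: float, sensitivity: float, eps: float,
                       delta: float, rng: random.Random = random) -> float:
    """Approximate (eps, delta)-DP release of a scalar query
    with the given L2 sensitivity."""
    sigma = gaussian_sigma(sensitivity, eps, delta)
    return value + rng.gauss(0.0, sigma)
```

For a fixed constant $\varepsilon$, the Gaussian mechanism's noise grows only as $\sqrt{\ln(1/\delta)}$, which is one intuition for why approximate DP can be much cheaper than pure DP in statistical tasks like those studied here.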
Source: arXiv:2604.07238v1 - http://arxiv.org/abs/2604.07238v1