
Certified Per-Instance Unlearning Using Individual Sensitivity Bounds

Hanna Benarroch

Abstract

Certified machine unlearning can be achieved via noise injection that yields differential privacy guarantees, where the noise is calibrated to worst-case sensitivity. Such conservative calibration often degrades performance, limiting practical applicability. In this work, we investigate an alternative approach based on adaptive per-instance noise calibration, tailored to the individual contribution of each data point to the learned solution. This raises the following challenge: how can one establish formal unlearning guarantees when the mechanism depends on the specific point to be removed? To define individual data-point sensitivities in noisy gradient dynamics, we draw on per-instance differential privacy. For ridge regression trained via Langevin dynamics, we derive high-probability per-instance sensitivity bounds, yielding certified unlearning with substantially less noise injection. We corroborate our theoretical findings through experiments in linear settings and provide further empirical evidence of the approach's relevance in deep learning settings.
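The abstract describes the method only at a high level. As a rough illustration of the setting it refers to, the sketch below trains ridge regression with noisy gradient (Langevin-style) updates and then handles a removal request by fine-tuning on the retained data and injecting Gaussian noise scaled by a per-instance sensitivity value. The names (per_instance_sensitivity, unlearn), the leave-one-out sensitivity proxy, and all constants are assumptions made for illustration; they are not the paper's actual bounds or algorithm.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's algorithm): ridge regression
# trained with noisy gradient (Langevin-style) updates, followed by an
# unlearning step whose injected noise scales with an assumed per-instance
# sensitivity rather than a worst-case bound.

def ridge_grad(w, X, y, lam):
    """Gradient of (1/n)||Xw - y||^2 + lam*||w||^2."""
    n = X.shape[0]
    return (2.0 / n) * X.T @ (X @ w - y) + 2.0 * lam * w

def train_langevin(X, y, lam=0.1, lr=0.01, temperature=1e-4, steps=2000, seed=0):
    """Noisy gradient descent: each update adds Gaussian noise."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        noise = rng.normal(scale=np.sqrt(2.0 * lr * temperature), size=w.shape)
        w = w - lr * ridge_grad(w, X, y, lam) + noise
    return w

def per_instance_sensitivity(X, y, i, lam):
    """Placeholder sensitivity proxy for point i: the norm of the closed-form
    leave-one-out parameter change. The paper instead derives high-probability
    bounds for the noisy dynamics, which this stand-in does not reproduce."""
    d = X.shape[1]
    w_full = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    mask = np.arange(X.shape[0]) != i
    X_i, y_i = X[mask], y[mask]
    w_loo = np.linalg.solve(X_i.T @ X_i + lam * np.eye(d), X_i.T @ y_i)
    return np.linalg.norm(w_full - w_loo)

def unlearn(w, X, y, i, lam=0.1, noise_multiplier=1.0, seed=1):
    """Unlearning sketch: a few corrective steps on the retained data, then
    Gaussian noise proportional to the assumed sensitivity of point i."""
    rng = np.random.default_rng(seed)
    sens = per_instance_sensitivity(X, y, i, lam)
    mask = np.arange(X.shape[0]) != i
    w_new = w.copy()
    for _ in range(50):
        w_new = w_new - 0.01 * ridge_grad(w_new, X[mask], y[mask], lam)
    return w_new + rng.normal(scale=noise_multiplier * sens, size=w.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 5))
    y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
    w = train_langevin(X, y)
    w_forgot = unlearn(w, X, y, i=7)
    print("parameter shift after unlearning:", np.linalg.norm(w - w_forgot))
```

The intended contrast with worst-case calibration is that the noise scale here depends on the removed point itself: a point with little influence on the solution receives a small sensitivity value and hence little added noise, whereas worst-case calibration would add the same large noise regardless of which point is removed.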


Source: arXiv:2602.15602v1 (http://arxiv.org/abs/2602.15602v1)
PDF: https://arxiv.org/pdf/2602.15602v1

Submission: 2/18/2026
Subjects: Statistics; Data Science

