Research Paper · Researchia: 202603.05010

Understanding and Mitigating Dataset Corruption in LLM Steering

Cullen Anderson


Submitted: March 5, 2026
Subjects: NLP; Computational Linguistics

Description / Details

Contrastive steering has been shown to be a simple and effective method for adjusting the generative behavior of LLMs at inference time. It uses examples of prompt responses that do and do not exhibit a target trait to identify a direction in an intermediate activation layer, then shifts activations along this one-dimensional subspace. However, despite its growing use in AI safety applications, the robustness of contrastive steering to noisy or adversarial data corruption is poorly understood. We initiate a study of the robustness of this process with respect to corruption of the dataset of examples used to train the steering direction. Our first observation is that contrastive steering is quite robust to a moderate amount of corruption, but unwanted side effects can be clearly and maliciously induced when a non-trivial fraction of the training data is altered. Second, we analyze the geometry of various types of corruption and identify some safeguards. Notably, a key step in learning the steering direction involves high-dimensional mean computation, and we show that replacing this step with a recently developed robust mean estimator often mitigates most of the unwanted effects of malicious corruption.
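The pipeline the abstract describes (compute a difference-of-means steering direction from contrastive activation examples, then shift activations along it at inference time) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the coordinate-wise trimmed mean below merely stands in for the robust mean estimator the authors refer to, which this abstract does not specify.

```python
import numpy as np

def trimmed_mean(X, trim_frac=0.1):
    """Coordinate-wise trimmed mean: drop the smallest and largest
    trim_frac of samples in each coordinate, then average the rest.
    A simple stand-in for a robust high-dimensional mean estimator."""
    n = X.shape[0]
    k = int(n * trim_frac)
    X_sorted = np.sort(X, axis=0)          # sort each coordinate independently
    return X_sorted[k:n - k].mean(axis=0)

def steering_direction(acts_pos, acts_neg, mean_fn=lambda X: X.mean(axis=0)):
    """Contrastive steering direction: difference of the per-class means
    of intermediate-layer activations, normalized to unit length.
    Passing a robust estimator as mean_fn hardens the mean-computation
    step against corrupted examples."""
    v = mean_fn(acts_pos) - mean_fn(acts_neg)
    return v / np.linalg.norm(v)

def steer(activations, direction, alpha=1.0):
    """Inference-time intervention: shift activations along the
    one-dimensional steering subspace with strength alpha."""
    return activations + alpha * direction
```

In this toy setup, replacing a small fraction of the "with-trait" examples with activations pointing in an attacker-chosen direction pulls the naive mean (and hence the steering direction) toward the attack, while the trimmed mean largely ignores the corrupted samples, mirroring the mitigation the abstract reports.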


Source: arXiv:2603.03206v1 (http://arxiv.org/abs/2603.03206v1)
PDF: https://arxiv.org/pdf/2603.03206v1

