Research Paper · Researchia: 202601.29101 · Cryptography > Cybersecurity

Hardware-Triggered Backdoors

Jonas Möller

Abstract

Machine learning models are routinely deployed on a wide range of computing hardware. Although such hardware is typically expected to produce identical results, differences in its design can lead to small numerical variations during inference. In this work, we show that these variations can be exploited to create backdoors in machine learning models. The core idea is to shape the model's decision function such that it yields different predictions for the same input when executed on different hardware. This effect is achieved by locally moving the decision boundary close to a target input and then refining numerical deviations to flip the prediction on selected hardware. We empirically demonstrate that these hardware-triggered backdoors can be created reliably across common GPU accelerators. Our findings reveal a novel attack vector affecting the use of third-party models, and we investigate different defenses to counter this threat.
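The paper's construction is not given here, but the numerical effect it relies on is easy to illustrate: floating-point addition is not associative, so the same per-feature contributions summed in different reduction orders (as different accelerators may do) can produce different logits. The following minimal sketch is not the authors' algorithm; it uses a hypothetical near-boundary input whose contribution values are chosen purely to make the order dependence visible, and a made-up 0.5 decision threshold.

```python
# Minimal illustration (not the paper's method) of how floating-point
# non-associativity makes the same input score differently depending on
# summation order -- a stand-in for the hardware-dependent numerical
# variations the paper exploits.

def logit_left_to_right(terms):
    """Sum contributions in order, as one hypothetical backend might."""
    total = 0.0
    for t in terms:
        total += t
    return total

def logit_reversed(terms):
    """Sum contributions in reverse order, mimicking a different backend."""
    total = 0.0
    for t in reversed(terms):
        total += t
    return total

# Hypothetical per-feature contributions w_i * x_i for one input that an
# attacker has pushed right up against the decision boundary.
contributions = [1e30, -1e30, 1.0]

a = logit_left_to_right(contributions)  # (1e30 + -1e30) + 1.0 = 1.0
b = logit_reversed(contributions)       # (1.0 + -1e30) + 1e30 = 0.0
print(a > 0.5, b > 0.5)                 # prediction flips between "backends"
```

In real accelerators the deviations are far smaller (on the order of a few ULPs from different parallel reduction trees or fused multiply-add behavior), which is why the paper first moves the decision boundary locally close to the target input before such tiny differences can flip the prediction.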


Source: arXiv:2601.21902v1 (http://arxiv.org/abs/2601.21902v1)
PDF: https://arxiv.org/pdf/2601.21902v1

Submission: 1/29/2026
Subjects: Cybersecurity; Cryptography
