Research Paper · Researchia:202605.01016

Defending Quantum Classifiers against Adversarial Perturbations through Quantum Autoencoders

Emma Andrews

Abstract

Machine learning models can learn from data samples to carry out various tasks efficiently. When data samples are adversarially manipulated, for example by inserting carefully crafted noise, the model can be made to err. Quantum machine learning models are also vulnerable to such adversarial attacks, especially in image classification with variational quantum classifiers. While there are promising defenses against these adversarial perturbations, such as training with adversarial samples, they face practical limitations: they are not applicable in scenarios where training with adversarial samples is not possible, and they can overfit the model to one type of attack. In this paper, we propose an adversarial-training-free defense framework that uses a quantum autoencoder to purify adversarial samples through reconstruction. Moreover, our defense framework provides a confidence metric to identify potentially adversarial samples that cannot be purified by the quantum autoencoder. Extensive evaluation demonstrates that our defense framework can significantly outperform the state of the art in prediction accuracy (by up to 68%) under adversarial attacks.

Submitted: May 1, 2026 · Subjects: Quantum Physics; Quantum Computing
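The defense described above purifies inputs by reconstruction and uses reconstruction error as a confidence metric. As a minimal sketch of that idea only, the snippet below uses a classical linear (PCA-style) autoencoder in place of the paper's quantum autoencoder; the class name, threshold logic, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class PurifyingAutoencoder:
    """Linear autoencoder stand-in (hypothetical) for the paper's quantum autoencoder."""

    def __init__(self, n_components=4):
        self.n_components = n_components

    def fit(self, X):
        # Learn a low-dimensional latent basis from clean training samples.
        self.mean_ = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_components]
        return self

    def purify(self, x):
        # Encode onto the latent basis and decode back; perturbations that
        # lie off the clean-data manifold are suppressed by reconstruction.
        z = (x - self.mean_) @ self.components_.T
        return self.mean_ + z @ self.components_

    def confidence(self, x):
        # Confidence metric: reconstruction error. A large value suggests an
        # adversarial sample the autoencoder cannot purify.
        return float(np.linalg.norm(x - self.purify(x)))

# Synthetic clean data lying on a 4-dimensional subspace of R^16.
rng = np.random.default_rng(0)
clean = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 16))
ae = PurifyingAutoencoder(n_components=4).fit(clean)

x = clean[0]
x_adv = x + rng.normal(scale=0.5, size=16)  # crafted-noise stand-in
print(ae.confidence(x) < ae.confidence(x_adv))  # adversarial sample scores higher
```

In a deployment of this scheme, inputs would be purified before classification, and samples whose reconstruction error exceeds a calibrated threshold would be flagged as potentially adversarial rather than trusted.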


Source: arXiv:2604.28176v1 (http://arxiv.org/abs/2604.28176v1)
PDF: https://arxiv.org/pdf/2604.28176v1

