Research Paper · Researchia:202602.12008 · [Computer Science > Cybersecurity]

Vulnerabilities in Partial TEE-Shielded LLM Inference with Precomputed Noise

Abhishek Saini

Abstract

The deployment of large language models (LLMs) on third-party devices requires new ways to protect model intellectual property. While Trusted Execution Environments (TEEs) offer a promising solution, their performance limits can lead to a critical compromise: using a precomputed, static secret basis to accelerate cryptographic operations. We demonstrate that this mainstream design pattern reintroduces a classic cryptographic flaw, the reuse of secret keying material, into the system's protocol. We confirm the vulnerability with two distinct attacks. First, our attack on a model-confidentiality system achieves a full confidentiality break by recovering its secret permutations and model weights. Second, our integrity attack completely bypasses the integrity checks of systems such as Soter and TSQP. Both attacks are practical against state-of-the-art LLMs: we recover a layer's secrets from a LLaMA-3 8B model in about 6 minutes, and the attack scales to compromise 405B-parameter LLMs across a variety of configurations.
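The flaw the abstract names, reuse of secret keying material, is the same failure mode as a two-time pad. The sketch below is an illustrative toy, not the paper's actual protocol: it assumes a TEE that shields weight matrices by adding a single precomputed noise matrix `N` before handing them to an untrusted accelerator. All variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the TEE masks secret weights by adding a
# precomputed, STATIC noise matrix N, which it reuses across layers.
N = rng.normal(size=(4, 4))   # static secret noise (reused -- the flaw)
W1 = rng.normal(size=(4, 4))  # secret weights, layer 1
W2 = rng.normal(size=(4, 4))  # secret weights, layer 2

masked1 = W1 + N              # what the untrusted accelerator observes
masked2 = W2 + N

# Because N is reused, it cancels in the difference of two observations,
# leaking the exact relationship between the two secret layers:
assert np.allclose(masked1 - masked2, W1 - W2)

# Worse: if the plaintext of any one masked value becomes known
# (e.g., a layer shared with a public checkpoint), N itself leaks,
# and every other masked weight is recoverable:
N_recovered = masked1 - W1
assert np.allclose(masked2 - N_recovered, W2)
```

With fresh noise per masking operation neither subtraction would cancel anything; it is the static, precomputed reuse that turns the mask into recoverable keying material.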


Source: arXiv:2602.11088v1 - http://arxiv.org/abs/2602.11088v1
PDF: https://arxiv.org/pdf/2602.11088v1
Original Link: http://arxiv.org/abs/2602.11088v1

Submission: 2/12/2026
Subjects: Cybersecurity; Computer Science

