
WildCat: Near-Linear Attention in Theory and Practice

Tobias Schröder

Abstract

We introduce WildCat, a high-accuracy, low-cost approach to compressing the attention mechanism in neural networks. While attention is a staple of modern network architectures, it is also notoriously expensive to deploy due to resource requirements that scale quadratically with the input sequence length $n$. WildCat avoids these quadratic costs by only attending over a small weighted coreset. Crucially, we select the coreset using a fast but spectrally-accurate subsampling algorithm -- randomly pivoted Cholesky -- and weight the elements optimally to minimise reconstruction error. Remarkably, given bounded inputs, WildCat approximates exact attention with super-polynomial $O(n^{-\sqrt{\log\log n}})$ error decay while running in near-linear $O(n^{1+o(1)})$ time. In contrast, prior practical approximations either lack error guarantees or require quadratic runtime to guarantee such high fidelity. We couple this advance with a GPU-optimized PyTorch implementation and a suite of benchmark experiments demonstrating the benefits of WildCat for image generation, image classification, and language model KV cache compression.
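The coreset-selection step named in the abstract, randomly pivoted Cholesky, is a known randomized low-rank approximation algorithm for positive-semidefinite kernel matrices: pivots are sampled proportionally to the residual diagonal, and each selection is followed by a Schur-complement update. The sketch below is a generic illustration of that algorithm on a toy Gaussian kernel, not the paper's GPU implementation; all function and variable names here are our own.

```python
import numpy as np

def rpcholesky(A, k, seed=None):
    """Randomly pivoted Cholesky: pick k pivot columns of a PSD kernel
    matrix A, returning a factor F with A ~ F @ F.T and the pivot indices."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    F = np.zeros((n, k))
    d = np.diag(A).astype(float).copy()  # residual diagonal of A - F F^T
    pivots = []
    for i in range(k):
        # Sample a pivot with probability proportional to the residual diagonal.
        s = rng.choice(n, p=d / d.sum())
        pivots.append(s)
        # Schur-complement update: remove the part of column s already captured.
        g = A[:, s] - F[:, :i] @ F[s, :i]
        F[:, i] = g / np.sqrt(g[s])
        d = np.maximum(d - F[:, i] ** 2, 0.0)  # clip tiny negative round-off
    return F, pivots

# Toy PSD kernel: Gaussian kernel on 200 points in [0, 1].
x = np.linspace(0.0, 1.0, 200)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)

F, pivots = rpcholesky(A, k=20, seed=0)
rel_err = np.linalg.norm(A - F @ F.T) / np.linalg.norm(A)
```

Because the sampling distribution concentrates on rows that the current factor explains poorly, a rank-20 factor already captures most of this smooth kernel, which is the spectral accuracy the abstract relies on for its error guarantee.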


Source: arXiv:2602.10056v1 (http://arxiv.org/abs/2602.10056v1)
PDF: https://arxiv.org/pdf/2602.10056v1

Submission: 2/11/2026
Subjects: Statistics; Data Science
