Provable Quantization with Randomized Hadamard Transform
Abstract
Vector quantization via random projection followed by scalar quantization is a fundamental primitive in machine learning, with applications ranging from similarity search to federated learning and KV cache compression. While dense random rotations yield clean theoretical guarantees, they require $\Theta(d^2)$ time. The randomized Hadamard transform $HD$ reduces this cost to $O(d \log d)$, but its discrete structure complicates analysis and leads to weaker or purely empirical compression guarantees. In this work, we study a variant of this approach: dithered quantization with a single randomized Hadamard transform. Specifically, the quantizer applies $HD$ to the input vector and subtracts a random scalar offset before quantizing, injecting additional randomness at negligible cost. We prove that this approach is unbiased and provides mean squared error bounds that asymptotically match those achievable with truly random rotation matrices. In particular, we prove that a dithered version of TurboQuant achieves a mean squared error bound at a given number of bits per coordinate, where the error term vanishes uniformly over all unit vectors and all dimensions as the number of quantization levels grows.
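To make the pipeline concrete, the following is a minimal NumPy sketch of the encode/decode path described above: multiply by a random sign diagonal $D$, apply the Hadamard transform $H$ in $O(d \log d)$ time, subtract a random scalar dither, and round to a uniform grid. The class name `DitheredRHTQuantizer`, the uniform grid sized by `n_levels`, and the max-magnitude range heuristic are illustrative assumptions and not the paper's TurboQuant construction; only the $HD$-plus-scalar-dither structure follows the abstract.

```python
import numpy as np


def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform in O(d log d).

    Assumes len(x) is a power of two (pad with zeros otherwise).
    """
    d = x.shape[0]
    y = x.copy()
    h = 1
    while h < d:
        y = y.reshape(-1, 2 * h)
        top, bot = y[:, :h], y[:, h:]
        y = np.concatenate([top + bot, top - bot], axis=1)
        h *= 2
    return y.reshape(d) / np.sqrt(d)


class DitheredRHTQuantizer:
    """Sketch: scalar quantization of H D x with a subtractive scalar dither.

    encode:  y = H D x,   q = round((y - u) / step)
    decode:  y_hat = step * q + u,   x_hat = D H y_hat
    (H is symmetric and orthonormal, D is a +/-1 diagonal, so both invert themselves.)
    """

    def __init__(self, dim, n_levels=16, seed=0):
        assert dim & (dim - 1) == 0, "this sketch assumes dim is a power of two"
        self.rng = np.random.default_rng(seed)
        self.signs = self.rng.choice([-1.0, 1.0], size=dim)  # diagonal of D
        self.n_levels = n_levels

    def encode(self, x):
        y = fwht(self.signs * x)                        # rotated vector H D x
        # Heuristic dynamic range for the uniform grid (assumption, not the
        # paper's grid construction).
        step = max(2.0 * np.max(np.abs(y)) / self.n_levels, 1e-12)
        u = self.rng.uniform(-step / 2.0, step / 2.0)   # single random scalar offset
        q = np.clip(np.round((y - u) / step),
                    -self.n_levels // 2, self.n_levels // 2)
        return q.astype(np.int32), step, u

    def decode(self, q, step, u):
        y_hat = step * q + u                            # undo the dither
        return self.signs * fwht(y_hat)                 # x_hat = D H y_hat


if __name__ == "__main__":
    d = 1024
    x = np.random.default_rng(7).standard_normal(d)
    quant = DitheredRHTQuantizer(d, n_levels=16, seed=1)
    q, step, u = quant.encode(x)
    x_hat = quant.decode(q, step, u)
    print("relative MSE:", float(np.mean((x - x_hat) ** 2) / np.mean(x ** 2)))
```

The subtractive dither is what gives unbiasedness: for a uniform quantizer with step $\Delta$ and $u \sim \mathrm{Uniform}[-\Delta/2, \Delta/2)$, the reconstruction $\Delta \cdot \mathrm{round}((y - u)/\Delta) + u$ has expectation exactly $y$ for every fixed $y$ (the quantization error is uniform with mean zero), so no systematic bias survives averaging; the clipping at the grid edges in this sketch slightly breaks that guarantee and is purely a convenience.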
Source: arXiv:2605.13810v1 (http://arxiv.org/abs/2605.13810v1); PDF: https://arxiv.org/pdf/2605.13810v1
May 14, 2026
Data Science
Machine Learning