Efficient Multivector Retrieval with Token-Aware Clustering and Hierarchical Indexing
Abstract
Multivector retrieval models achieve state-of-the-art effectiveness through fine-grained token-level representations, but their deployment incurs substantial computational and memory costs. Current solutions, based on the well-known k-means clustering algorithm, group similar vectors together to enable both effective compression and efficient retrieval. However, standard k-means scales poorly with the number of clusters and dataset size, and favours frequent tokens during training while underrepresenting rare, discriminative ones. In this work, we introduce TACHIOM, a multivector retrieval system that exploits token-level structure to significantly accelerate both clustering and retrieval. By accounting for the distribution of tokens during centroid allocation, TACHIOM scales easily to millions of centroids, enabling highly accurate document scoring using centroids alone and avoiding expensive token-level computation. TACHIOM combines a graph-based index over centroids with an optimized Product Quantization layout for efficient final scoring. Experiments on MS-MARCOv1 and LoTTE show that TACHIOM clusters substantially faster than k-means and achieves significant retrieval speedups over state-of-the-art systems while maintaining comparable or superior effectiveness.
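The abstract attributes TACHIOM's scalability to accounting for the token distribution when allocating centroids, but does not spell out the allocation rule. The sketch below is therefore only an illustration of the general idea, not the paper's method: split a global centroid budget across per-token vector groups, with a floor so rare (often discriminative) tokens are not starved, and run many small per-token k-means instances instead of one giant global one. All names here (allocate_centroids, token_aware_kmeans, min_per_token) are hypothetical.

import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

def allocate_centroids(token_ids, total_budget, min_per_token=1):
    """Split a global centroid budget across token groups.

    Illustrative rule: allocate proportionally to each token's vector
    count, but guarantee a floor for rare tokens. (The exact rule used
    by TACHIOM is not given in the abstract.)
    """
    counts = defaultdict(int)
    for t in token_ids:
        counts[t] += 1
    total = sum(counts.values())
    budget = {}
    for t, c in counts.items():
        share = round(total_budget * c / total)
        # never allocate more centroids than a group has vectors;
        # the floor means the sum may slightly exceed total_budget
        budget[t] = max(min_per_token, min(share, c))
    return budget

def token_aware_kmeans(vectors, token_ids, total_budget):
    """Run one small k-means per token group instead of one global k-means."""
    token_ids = np.asarray(token_ids)
    budget = allocate_centroids(token_ids, total_budget)
    centroids, owner_token = [], []
    for t, k in budget.items():
        group = vectors[token_ids == t]
        km = KMeans(n_clusters=k, n_init=1, max_iter=25).fit(group)
        centroids.append(km.cluster_centers_)
        owner_token.extend([t] * k)
    return np.vstack(centroids), np.asarray(owner_token)

# toy usage: 1000 vectors over 20 token ids, 64-dim, 100 total centroids
rng = np.random.default_rng(0)
vecs = rng.normal(size=(1000, 64)).astype(np.float32)
toks = rng.integers(0, 20, size=1000)
C, owners = token_aware_kmeans(vecs, toks, total_budget=100)
print(C.shape, owners.shape)

Because each per-token k-means runs on a small group with a small k, the total work grows roughly linearly with the collection rather than with (clusters x dataset size), which is one plausible reading of how such a scheme scales to millions of centroids.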
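Similarly, "document scoring using centroids alone" suggests approximating the usual late-interaction (MaxSim) score by substituting each document token's centroid for its exact embedding. Here is a minimal sketch of that approximation, assuming a ColBERT-style scoring function; centroid_only_maxsim is an illustrative name, not the paper's API.

import numpy as np

def centroid_only_maxsim(query_vecs, doc_centroid_ids, centroids):
    """Approximate ColBERT-style MaxSim using centroids only.

    Each document token is represented by the id of its nearest
    centroid; the late-interaction score is computed against those
    centroid vectors instead of the exact token embeddings.
    """
    doc_reps = centroids[doc_centroid_ids]  # (n_doc_tokens, dim)
    sims = query_vecs @ doc_reps.T          # (n_query_tokens, n_doc_tokens)
    return sims.max(axis=1).sum()           # max over doc tokens, summed over query tokens

# toy usage: 32 query token vectors, a doc of 80 tokens mapped into 1000 centroids
rng = np.random.default_rng(0)
q = rng.normal(size=(32, 128)).astype(np.float32)
cents = rng.normal(size=(1000, 128)).astype(np.float32)
doc_ids = rng.integers(0, 1000, size=80)
print(centroid_only_maxsim(q, doc_ids, cents))

In this reading, the graph-based index over centroids would serve to find each query token's nearest centroids quickly, and the Product Quantization layout would compress the exact token embeddings for the final re-scoring pass; the abstract does not detail either component further.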
Source: arXiv:2604.28142v1 (abstract: http://arxiv.org/abs/2604.28142v1, PDF: https://arxiv.org/pdf/2604.28142v1)
May 1, 2026
Data Science
Machine Learning