
Asynchronous Verified Semantic Caching for Tiered LLM Architectures

Asmit Kumar Singh

Abstract

Large language models (LLMs) now sit in the critical path of search, assistance, and agentic workflows, making semantic caching essential for reducing inference cost and latency. Production deployments typically use a tiered static-dynamic design: a static cache of curated, offline-vetted responses mined from logs, backed by a dynamic cache populated online. In practice, both tiers are commonly governed by a single embedding-similarity threshold, which induces a hard tradeoff: conservative thresholds miss safe reuse opportunities, while aggressive thresholds risk serving semantically incorrect responses. We introduce Krites, an asynchronous, LLM-judged caching policy that expands static coverage without changing serving decisions. On the critical path, Krites behaves exactly like a standard static-threshold policy. When the nearest static neighbor of the prompt falls just below the static threshold, Krites asynchronously invokes an LLM judge to verify whether the static response is acceptable for the new prompt. Approved matches are promoted into the dynamic cache, allowing future repeats and paraphrases to reuse curated static answers and expanding static reach over time. In trace-driven simulations on conversational and search workloads, Krites increases the fraction of requests served with curated static answers (direct static hits plus verified promotions) by up to 3.9 times for conversational traffic and search-style queries relative to tuned baselines, with unchanged critical-path latency.

Submitted: February 17, 2026
Subjects: AI; Artificial Intelligence
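The serving-and-promotion flow described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the word-overlap similarity, the `KritesCache` class, the threshold values, and the stubbed judge and LLM callbacks are all stand-ins for the embedding model, tuned thresholds, and LLM judge a production system would use.

```python
import threading

def jaccard(a: str, b: str) -> float:
    """Toy similarity over word sets; a real system would use embedding cosine similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class KritesCache:
    """Hypothetical sketch of the Krites policy: static threshold on the
    critical path, asynchronous judge-verified promotion off it."""

    def __init__(self, static_cache, judge, llm, tau_static=0.9, tau_judge=0.5):
        self.static = static_cache        # curated prompt -> vetted response
        self.dynamic = {}                 # online tier; receives promotions
        self.judge, self.llm = judge, llm
        self.tau_static = tau_static      # direct static-hit threshold
        self.tau_judge = tau_judge        # lower bound of the "near miss" band
        self._pending = []                # in-flight judge threads

    def serve(self, prompt: str) -> str:
        # Exact dynamic hit, including previously promoted static answers.
        if prompt in self.dynamic:
            return self.dynamic[prompt]
        # Nearest static neighbor by similarity.
        best, sim = max(((p, jaccard(prompt, p)) for p in self.static),
                        key=lambda x: x[1], default=(None, 0.0))
        if best is not None and sim >= self.tau_static:
            return self.static[best]      # direct static hit
        if best is not None and sim >= self.tau_judge:
            # Near miss: verify off the critical path; promote if approved.
            t = threading.Thread(target=self._verify, args=(prompt, best))
            t.start()
            self._pending.append(t)
        return self.llm(prompt)           # normal serving path, unchanged

    def _verify(self, prompt: str, static_prompt: str) -> None:
        if self.judge(prompt, self.static[static_prompt]):
            self.dynamic[prompt] = self.static[static_prompt]

    def drain(self) -> None:
        # Helper for tests/shutdown: wait for in-flight judgments.
        for t in self._pending:
            t.join()
        self._pending.clear()
```

Because the judge runs on a background thread and only writes into the dynamic tier, the first request in the near-miss band is served by the normal LLM path at unchanged latency, while later paraphrases of the same prompt hit the promoted curated answer.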


Source: arXiv:2602.13165v1 (http://arxiv.org/abs/2602.13165v1)
PDF: https://arxiv.org/pdf/2602.13165v1

