Research Paper · Researchia:202604.01004 · Data Science > Machine Learning

Reward-Based Online LLM Routing via NeuralUCB

Ming-Hua Tsai

Abstract

This study investigates the use of NeuralUCB for cost-aware large language model (LLM) routing. Existing routing approaches can be broadly grouped into supervised routing methods and partial-feedback methods, each with different tradeoffs in efficiency and adaptivity. We implement a NeuralUCB-based routing policy and evaluate it on RouterBench under a simulated online setting. Experimental results show that the proposed method consistently outperforms random and min-cost baselines in utility reward. Compared with the max-quality reference, our method achieves substantially lower inference cost while maintaining competitive reward. These findings suggest that NeuralUCB is a promising approach for cost-aware LLM routing, while also highlighting remaining challenges in action discrimination and exploration.
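To make the routing setup concrete, the sketch below implements a NeuralUCB-style policy over K candidate LLMs (arms) in plain NumPy. This is an illustrative simplification, not the paper's implementation: a one-hidden-layer MLP predicts the utility reward (quality minus cost) of sending a query embedding to each model, and the exploration bonus follows the NeuralUCB recipe of a confidence width computed from the gradient of the prediction with respect to the network weights. All names (`NeuralUCBRouter`, `select`, `update`) and hyperparameters are hypothetical.

```python
import numpy as np

class NeuralUCBRouter:
    """Hypothetical sketch of a NeuralUCB routing policy over n_arms LLMs.

    A one-hidden-layer MLP f(x; theta) scores (query, arm) pairs; the arm's
    UCB score is f(x) + gamma * sqrt(g^T Z^{-1} g), where g = grad_theta f(x)
    and Z accumulates rank-one gradient outer products, as in NeuralUCB.
    """

    def __init__(self, dim, n_arms, hidden=16, lam=1.0, gamma=0.1,
                 lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.n_arms = n_arms
        d_in = dim + n_arms                       # query features + one-hot arm id
        self.W1 = rng.normal(0, 1 / np.sqrt(d_in), (hidden, d_in))
        self.w2 = rng.normal(0, 1 / np.sqrt(hidden), hidden)
        self.p = self.W1.size + self.w2.size      # total parameter count
        self.Zinv = np.eye(self.p) / lam          # inverse confidence matrix
        self.gamma, self.lr = gamma, lr

    def _feat(self, query, arm):
        onehot = np.zeros(self.n_arms)
        onehot[arm] = 1.0
        return np.concatenate([query, onehot])

    def _forward(self, x):
        h = np.tanh(self.W1 @ x)
        return h, float(self.w2 @ h)

    def _grad(self, x):
        # Flattened gradient of the scalar output w.r.t. (W1, w2).
        h, _ = self._forward(x)
        dW1 = np.outer(self.w2 * (1.0 - h ** 2), x)
        return np.concatenate([dW1.ravel(), h])

    def select(self, query):
        """Pick the arm maximizing predicted reward plus exploration bonus."""
        scores = []
        for a in range(self.n_arms):
            x = self._feat(query, a)
            _, mu = self._forward(x)
            g = self._grad(x)
            bonus = self.gamma * np.sqrt(g @ self.Zinv @ g)
            scores.append(mu + bonus)
        return int(np.argmax(scores))

    def update(self, query, arm, reward):
        """One SGD step on squared error, then a rank-one confidence update."""
        x = self._feat(query, arm)
        h, mu = self._forward(x)
        err = mu - reward
        # Gradients computed before any parameter is modified.
        dW1 = err * np.outer(self.w2 * (1.0 - h ** 2), x)
        dw2 = err * h
        self.W1 -= self.lr * dW1
        self.w2 -= self.lr * dw2
        # Sherman-Morrison rank-one update of Z^{-1} with the new gradient.
        g = self._grad(x)
        Zg = self.Zinv @ g
        self.Zinv -= np.outer(Zg, Zg) / (1.0 + g @ Zg)
```

In a simulated online loop of the kind the abstract describes, each incoming query embedding would be passed to `select`, the chosen model queried, and the observed utility reward fed back through `update`. Maintaining the full `p x p` inverse matrix is only feasible for small networks; practical NeuralUCB implementations typically use a diagonal approximation of Z.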


Source: arXiv:2603.30035v1 — http://arxiv.org/abs/2603.30035v1
PDF: https://arxiv.org/pdf/2603.30035v1

Submission: 4/1/2026
Subjects: Machine Learning; Data Science

