
Adaptive multi-fidelity optimization with fast learning rates

Côme Fiegel

Abstract

In multi-fidelity optimization, biased approximations of the target function are available at varying costs. This paper studies the problem of optimizing a locally smooth function with a limited budget, where the learner must trade off the cost and the bias of these approximations. We first prove lower bounds on the simple regret under different assumptions on the fidelities, based on a cost-to-bias function. We then present the Kometo algorithm, which achieves these rates up to additional logarithmic factors without any knowledge of the function's smoothness or the fidelity assumptions, improving on previously proven guarantees. We finally show empirically that our algorithm outperforms previous multi-fidelity optimization methods without knowledge of problem-dependent parameters.

Submitted: April 20, 2026 · Subjects: Statistics; Data Science
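The setting described in the abstract can be illustrated with a toy sketch. This is not the Kometo algorithm: the two-stage strategy, the cost and bias values, and all function names below are illustrative assumptions. The sketch only shows the core trade-off, using cheap, biased fidelities to shortlist candidates and spending the remaining budget on exact but expensive evaluations.

```python
import math
import random

def f(x):
    # Target function to maximize on [0, 1] (locally smooth around its maximum at 0.7).
    return 1.0 - (x - 0.7) ** 2

# Hypothetical two-fidelity setup: fidelity 0 is 10x cheaper but biased by up to 0.15.
COST = {0: 1.0, 1: 10.0}
BIAS = {0: 0.15, 1: 0.0}

def query(x, m, rng):
    # Biased approximation of f(x) at fidelity m: error magnitude bounded by BIAS[m].
    return f(x) + rng.uniform(-BIAS[m], BIAS[m])

def two_stage(budget, rng):
    # Stage 1: spend half the budget on cheap, biased queries over a grid
    # to shortlist promising points.
    grid = [i / 63 for i in range(64)]
    spent, scores = 0.0, {}
    for x in grid:
        if spent + COST[0] > budget / 2:
            break
        scores[x] = query(x, 0, rng)
        spent += COST[0]
    shortlist = sorted(scores, key=scores.get, reverse=True)
    # Stage 2: re-evaluate the shortlist at the exact (unbiased) fidelity
    # until the budget runs out; return the best exactly-evaluated point.
    best_x, best_v = None, -math.inf
    for x in shortlist:
        if spent + COST[1] > budget:
            break
        v = query(x, 1, rng)
        spent += COST[1]
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v

rng = random.Random(0)
x_hat, v_hat = two_stage(budget=100.0, rng=rng)
print(x_hat, v_hat)  # simple regret is f(0.7) - f(x_hat)
```

With a budget of 100, this sketch makes 50 biased queries and 5 exact ones; the paper's contribution is precisely that such cost/bias allocations can be made adaptively, with regret guarantees governed by a cost-to-bias function, and without knowing the smoothness or fidelity parameters assumed above.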


Source: arXiv:2604.16239v1 (http://arxiv.org/abs/2604.16239v1) · PDF: https://arxiv.org/pdf/2604.16239v1
