Research Paper — Researchia:202602.16003 [Data Science > Machine Learning]

Improved Regret Guarantees for Online Mirror Descent using a Portfolio of Mirror Maps

Swati Gupta

Abstract

Online mirror descent (OMD) and its variants give a flexible framework for online convex optimization (OCO), where performance depends crucially on the choice of the mirror map. While the geometries underlying online projected gradient descent (OPGD) and online exponentiated gradient (OEG), both special cases of OMD, are well understood, it remains a challenging open question how to construct an optimal mirror map for a given constrained set and a general family of loss functions, e.g., sparse losses. Motivated by parameterizing a near-optimal set of mirror maps, we consider a simpler question: is it even possible to obtain polynomial gains in regret by using mirror maps for geometries that interpolate between $L_1$ and $L_2$, which may not be possible by restricting to only OEG ($L_1$) or OPGD ($L_2$)? Our main result answers this question positively. We show that mirror maps based on block norms adapt better to the sparsity of loss functions than previous $L_p$ (for $p \in [1, 2]$) interpolations. In particular, we construct a family of online convex optimization instances in $\mathbb{R}^d$ where block-norm-based mirror maps achieve a provable polynomial (in $d$) improvement in regret over OEG and OPGD for sparse loss functions. We then turn to the setting in which the sparsity level of the loss functions is unknown. In this case, the choice of geometry itself becomes an online decision problem. We first show that naively switching between OEG and OPGD can incur linear regret, highlighting the intrinsic difficulty of geometry selection. To overcome this issue, we propose a meta-algorithm based on multiplicative weights that dynamically selects among a family of uniform block norms. We show that this approach effectively tunes OMD to the sparsity of the losses, yielding adaptive regret guarantees. Overall, our results demonstrate that online mirror-map selection can significantly enhance the ability of OMD to exploit sparsity in online convex optimization.
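To make the setting concrete, the following is a minimal sketch of the two base geometries the abstract contrasts (OPGD with the Euclidean mirror map, OEG with the entropic one) combined by a multiplicative-weights meta-learner. It is an illustration only, not the paper's algorithm: it assumes linear losses on the probability simplex, and all function names, step sizes, and the expert-mixing scheme are our own simplifying assumptions.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex (sort-based method).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def opgd_step(x, g, eta):
    # Euclidean mirror map: gradient step followed by projection (OPGD).
    return project_simplex(x - eta * g)

def oeg_step(x, g, eta):
    # Entropic mirror map: exponentiated-gradient update (OEG).
    w = x * np.exp(-eta * g)
    return w / w.sum()

def mw_over_mirror_maps(loss_grad, d, T, eta_base=0.1, eta_mw=0.5):
    # Hypothetical meta-algorithm: multiplicative weights over two OMD
    # experts, each running its own geometry; the played point is the
    # weight-averaged mixture of the expert iterates.
    experts = [opgd_step, oeg_step]
    xs = [np.ones(d) / d for _ in experts]
    w = np.ones(len(experts))
    total_loss = 0.0
    for t in range(T):
        p = w / w.sum()
        x_play = sum(pi * xi for pi, xi in zip(p, xs))
        g = loss_grad(t, x_play)           # linear loss: gradient is constant in x
        total_loss += float(g @ x_play)
        for i, step in enumerate(experts):
            w[i] *= np.exp(-eta_mw * float(g @ xs[i]))  # reweight by expert's loss
            xs[i] = step(xs[i], g, eta_base)
        w /= w.sum()                       # renormalize to avoid underflow
    return total_loss, x_play
```

The point of the sketch is the abstract's observation that geometry selection is itself an online decision problem: rather than committing to OEG or OPGD up front, the meta-learner shifts weight toward whichever mirror map is incurring less loss on the observed (possibly sparse) gradient sequence.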


Source: arXiv:2602.13177v1 — http://arxiv.org/abs/2602.13177v1
PDF: https://arxiv.org/pdf/2602.13177v1

Submission: 2/16/2026
Subjects: Machine Learning; Data Science

