Is Online Linear Optimization Sufficient for Strategic Robustness?
Abstract
We consider bidding in repeated Bayesian first-price auctions. Bidding algorithms that achieve optimal regret have been extensively studied, but their strategic robustness to manipulation by the seller remains relatively underexplored. Bidding algorithms built on no-swap-regret algorithms achieve both desirable properties, but are suboptimal in terms of statistical and computational efficiency. In contrast, online gradient ascent is the only known algorithm that achieves both near-optimal regret and strategic robustness [KSS24], where $T$ denotes the number of auctions and $n$ the number of bids. In this paper, we ask whether simple online linear optimization (OLO) algorithms suffice for bidding algorithms with both desirable properties. Our main result shows that sublinear linearized regret is sufficient for strategic robustness. Specifically, we construct simple black-box reductions that convert any OLO algorithm into a strategically robust no-regret bidding algorithm, in both the known and unknown value distribution settings. In the known value distribution case, our reduction yields a bidding algorithm that achieves both no-regret learning and strategic robustness, with an exponential improvement in the dependence on $n$ compared to [KSS24]. In the unknown value distribution case, our reduction gives a bidding algorithm with high-probability regret guarantees and strategic robustness, while removing the bounded density assumption made in [KSS24].
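To make the abstract's central notion concrete, the following is a minimal sketch of the basic OLO primitive it refers to: projected online gradient ascent on linear utilities over a bounded interval, with the standard $1/\sqrt{T}$ step size that yields $O(\sqrt{T})$ linearized regret. This is only an illustration of the OLO building block, not the paper's reduction or its bidding algorithm; the utility slopes here are synthetic.

```python
import numpy as np

def online_gradient_ascent(grads, eta):
    """Projected OGA over [0, 1]: play x_t, observe slope g_t of the
    linear utility u_t(x) = g_t * x, update x_{t+1} = clip(x_t + eta * g_t)."""
    x, plays = 0.5, []
    for g in grads:
        plays.append(x)
        x = min(1.0, max(0.0, x + eta * g))  # projection onto [0, 1]
    return np.array(plays)

rng = np.random.default_rng(0)
T = 2000
grads = rng.uniform(-1.0, 1.0, size=T)           # synthetic linear-utility slopes
plays = online_gradient_ascent(grads, eta=1.0 / np.sqrt(T))

realized = float(np.dot(plays, grads))           # cumulative utility of OGA
best_fixed = max(0.0, float(grads.sum()))        # best fixed point in [0, 1] in hindsight
linearized_regret = best_fixed - realized        # should grow only as O(sqrt(T))
```

With the step size $\eta = 1/\sqrt{T}$ and slopes bounded by 1, the usual OGA analysis bounds the linearized regret by roughly $\sqrt{T}$, which is the sublinear guarantee the paper's reductions take as input.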
Source: arXiv:2602.12253v1 - http://arxiv.org/abs/2602.12253v1