
A GPU-accelerated Nonlinear Branch-and-Bound Framework for Sparse Linear Models

Xiang Meng

Abstract

We study exact sparse linear regression with an $\ell_0$-$\ell_2$ penalty and develop a branch-and-bound (BnB) algorithm explicitly designed for GPU execution. Starting from a perspective reformulation, we derive an interval relaxation that can be solved by ADMM with closed-form, coordinate-wise updates. We structure these updates so that the main work at each BnB node reduces to batched matrix-vector operations with a shared data matrix, enabling fine-grained parallelism across coordinates and coarse-grained parallelism across many BnB nodes on a single GPU. Feasible solutions (upper bounds) are generated by a projected gradient method on the active support, implemented in a batched fashion so that many candidate supports are updated in parallel on the GPU. We discuss practical design choices such as memory layout, batching strategies, and load balancing across nodes that are crucial for obtaining good utilization on modern GPUs. On synthetic and real high-dimensional datasets, our GPU-based approach achieves clear runtime improvements over a CPU implementation of our method, an existing specialized BnB method, and commercial MIP solvers.
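The abstract's upper-bound routine runs projected gradient on many candidate supports at once, sharing one data matrix across the whole batch so the work becomes a few large matrix products. A minimal NumPy sketch of one such batched step is below; the function name, shapes, and step-size choice are our own illustration under stated assumptions, not the authors' implementation (on a GPU the same arithmetic would map to CuPy or PyTorch arrays):

```python
import numpy as np

def batched_pg_step(X, y, Beta, masks, lam2, step):
    """One projected-gradient step for ridge-penalized least squares,
    restricted to a fixed candidate support per batch row.

    X     : (n, p) shared data matrix
    y     : (n,)   response vector
    Beta  : (B, p) current iterates, one row per candidate support
    masks : (B, p) 0/1 support indicators
    """
    # Batched residuals via one product with the shared matrix: (n, B)
    R = X @ Beta.T - y[:, None]
    # Batched gradient of 0.5*||X b - y||^2 + 0.5*lam2*||b||^2 : (B, p)
    G = (X.T @ R).T + lam2 * Beta
    # Gradient step, then projection onto the support (zero outside it)
    return (Beta - step * G) * masks
```

With a step size no larger than the reciprocal of the gradient's Lipschitz constant, $\sigma_{\max}(X)^2 + \lambda_2$, each step keeps every iterate on its support and does not increase its objective; the batch dimension B is where the coarse-grained GPU parallelism comes from.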


Source: arXiv:2602.04551v1 - http://arxiv.org/abs/2602.04551v1
PDF: https://arxiv.org/pdf/2602.04551v1

Submission: 2/4/2026
Subjects: Mathematics
