Research Paper | Researchia:202603.12066 | [Data Science > Machine Learning]

GAST: Gradient-aligned Sparse Tuning of Large Language Models with Data-layer Selection

Kai Yao

Abstract

Parameter-Efficient Fine-Tuning (PEFT) has become a key strategy for adapting large language models, with recent advances in sparse tuning reducing overhead by selectively updating key parameters or subsets of data. Existing approaches generally follow one of two distinct paradigms: layer-selective methods, which fine-tune only critical layers to minimize computational load, and data-selective methods, which select effective training subsets to improve training efficiency. However, current methods typically overlook the fact that different data points contribute to varying degrees across model layers, and they often discard potentially valuable information from data perceived as low-quality. To address these limitations, we propose Gradient-aligned Sparse Tuning (GAST), a method that performs selective fine-tuning along both the data and layer dimensions as integral components of a unified optimization strategy. GAST targets informational redundancy through a layer-sparse strategy that adaptively selects the most impactful data points for each layer, yielding a more comprehensive solution than approaches restricted to a single dimension. Experiments demonstrate that GAST consistently outperforms baseline methods, establishing a promising direction for future research on PEFT strategies.
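
As a quick illustration of the idea described above, the following sketch shows one plausible form of per-layer, gradient-aligned data selection: each training sample is scored, per parameter tensor, by the cosine similarity between its gradient and the batch's mean gradient direction, and each layer then keeps only its top-k most aligned samples. This is a minimal, hypothetical PyTorch sketch, not the paper's algorithm; the alignment score, the per-sample gradient loop, and the function names (`per_sample_layer_grads`, `gradient_aligned_selection`) are assumptions introduced here for illustration.

```python
# Hypothetical sketch of per-layer, gradient-aligned data selection.
# Assumed (not from the paper): alignment = cosine similarity between a
# sample's gradient and the batch-mean gradient, computed per parameter
# tensor (a stand-in for "per layer" in this toy); each layer then keeps
# its own top-k most aligned samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

def per_sample_layer_grads(model, loss_fn, xs, ys):
    """Return {param_name: (N, D) tensor} of flattened per-sample gradients."""
    grads = {name: [] for name, p in model.named_parameters() if p.requires_grad}
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        for name, p in model.named_parameters():
            if p.requires_grad:
                grads[name].append(p.grad.detach().flatten().clone())
    return {name: torch.stack(gs) for name, gs in grads.items()}

def gradient_aligned_selection(layer_grads, k):
    """For each layer, pick the k samples whose gradients best align
    (cosine similarity) with that layer's mean gradient direction."""
    selected = {}
    for name, g in layer_grads.items():              # g: (N, D)
        ref = g.mean(dim=0, keepdim=True)            # reference direction (1, D)
        scores = F.cosine_similarity(g, ref, dim=1)  # alignment per sample (N,)
        selected[name] = scores.topk(min(k, g.size(0))).indices
    return selected

# Toy usage: a 2-layer MLP on 8 synthetic samples, keeping top-4 per layer.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
xs, ys = torch.randn(8, 16), torch.randint(0, 2, (8,))
grads = per_sample_layer_grads(model, nn.CrossEntropyLoss(), xs, ys)
for name, idx in gradient_aligned_selection(grads, k=4).items():
    print(f"{name}: keep samples {idx.tolist()}")
```

The per-sample backward loop is written for clarity, not speed; in practice one would batch per-sample gradients (e.g., with `torch.func.vmap` over `torch.func.grad`) and would likely use the selected indices to mask each layer's update rather than re-running separate passes.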


Source: arXiv:2603.09865v1 (http://arxiv.org/abs/2603.09865v1)
PDF: https://arxiv.org/pdf/2603.09865v1

Submission: 3/12/2026
Subjects: Machine Learning; Data Science
Note: This paper is hosted on arXiv, an open-access repository.

