Research Paper · Researchia:202604.24070

GiVA: Gradient-Informed Bases for Vector-Based Adaptation

Neeraj Gangwar

Abstract

As model sizes continue to grow, parameter-efficient fine-tuning has emerged as a powerful alternative to full fine-tuning. While LoRA is widely adopted among these methods, recent research has explored vector-based adaptation methods due to their extreme parameter efficiency. However, these methods typically require substantially higher ranks than LoRA to match its performance, leading to increased training costs. This work introduces GiVA, a gradient-based initialization strategy for vector-based adaptation. It achieves training times comparable to LoRA and maintains the extreme parameter efficiency of vector-based adaptation. We evaluate GiVA across diverse benchmarks, including natural language understanding, natural language generation, and image classification. Experiments show that our approach consistently outperforms or achieves performance competitive with existing vector-based adaptation methods and LoRA while reducing rank requirements by a factor of eight (8×).

Submitted: April 24, 2026 · Subjects: AI; Artificial Intelligence
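The abstract does not spell out GiVA's construction, but the general idea behind vector-based adaptation can be sketched. In VeRA-style methods, the low-rank basis matrices are frozen and shared, and only small per-layer scaling vectors are trained; a gradient-informed initialization might derive those frozen bases from the top singular directions of an initial fine-tuning gradient rather than from random noise. The sketch below is an illustrative assumption about this family of methods, not the paper's actual algorithm; all function names and shapes are hypothetical:

```python
import numpy as np

def gradient_informed_bases(grad, rank):
    # Hypothetical: take the top-`rank` singular directions of an initial
    # weight gradient as the frozen bases (an assumption about the
    # "gradient-informed" idea, not taken from the paper itself).
    U, _, Vt = np.linalg.svd(grad, full_matrices=False)
    return U[:, :rank], Vt[:rank, :]   # B: (d_out, r), A: (r, d_in)

def vector_adapter_delta(B, A, b_vec, d_vec):
    # VeRA-style update: bases B and A stay frozen; only the scaling
    # vectors b_vec (length r) and d_vec (length d_out) are trained.
    return np.diag(d_vec) @ B @ np.diag(b_vec) @ A

rng = np.random.default_rng(0)
grad = rng.standard_normal((16, 32))   # stand-in for a weight gradient
B, A = gradient_informed_bases(grad, rank=4)
delta = vector_adapter_delta(B, A, np.ones(4), np.ones(16))
print(delta.shape)  # (16, 32)
```

Under this framing, the trainable parameter count per layer is only r + d_out scalars, which is why such methods need far fewer parameters than LoRA at the same rank.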


Source: arXiv:2604.21901v1 (http://arxiv.org/abs/2604.21901v1)
PDF: https://arxiv.org/pdf/2604.21901v1

