
POET-X: Memory-efficient LLM Training by Scaling Orthogonal Transformation

Zeju Qiu

Abstract

Efficient and stable training of large language models (LLMs) remains a core challenge in modern machine learning systems. Reparameterized Orthogonal Equivalence Training (POET), a spectrum-preserving framework that optimizes each weight matrix through orthogonal equivalence transformations, was proposed to address this challenge. Although POET provides strong training stability, its original implementation incurs high memory consumption and computational overhead due to intensive matrix multiplications. To overcome these limitations, we introduce POET-X, a scalable and memory-efficient variant that performs orthogonal equivalence transformations at significantly reduced computational cost. POET-X retains the generalization and stability benefits of POET while delivering substantial improvements in throughput and memory efficiency. In our experiments, POET-X enables pretraining of billion-parameter LLMs on a single NVIDIA H100 GPU, whereas standard optimizers such as AdamW run out of memory under the same settings.
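For readers unfamiliar with the mechanism, below is a minimal PyTorch sketch of an orthogonal equivalence transformation, assuming the effective weight takes the form W = R · W0 · P with a fixed base weight W0 and learnable orthogonal factors R and P. This is an illustrative reconstruction, not the authors' implementation; the class and variable names, the skew-symmetric/matrix-exponential parameterization of the orthogonal factors, and the layer interface are all assumptions made for this sketch.

# Minimal sketch (not the authors' code): orthogonal equivalence transformation
# of a fixed weight W0 via learnable orthogonal factors R and P, so that the
# effective weight is W = R @ W0 @ P. Orthogonal factors leave singular values
# unchanged, so the spectrum of W0 is preserved.
import torch
import torch.nn as nn


class OrthogonalEquivalenceLinear(nn.Module):
    """Hypothetical layer: y = x @ (R @ W0 @ P).T + b, with R and P orthogonal."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Fixed (non-trainable) base weight; its singular values are preserved.
        self.register_buffer(
            "W0", torch.randn(out_features, in_features) / in_features**0.5
        )
        # Skew-symmetric generators: the matrix exponential of a skew-symmetric
        # matrix is orthogonal (one common way to parameterize orthogonal factors).
        self.A_r = nn.Parameter(torch.zeros(out_features, out_features))
        self.A_p = nn.Parameter(torch.zeros(in_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        R = torch.matrix_exp(self.A_r - self.A_r.T)  # orthogonal left factor
        P = torch.matrix_exp(self.A_p - self.A_p.T)  # orthogonal right factor
        W = R @ self.W0 @ P                          # spectrum-preserving weight
        return x @ W.T + self.bias


# Usage: singular values of the effective weight match those of W0.
layer = OrthogonalEquivalenceLinear(64, 32)
with torch.no_grad():
    layer.A_r.normal_()  # perturb so R != I, making the check non-trivial
    layer.A_p.normal_()
    R = torch.matrix_exp(layer.A_r - layer.A_r.T)
    P = torch.matrix_exp(layer.A_p - layer.A_p.T)
    s0 = torch.linalg.svdvals(layer.W0)
    s = torch.linalg.svdvals(R @ layer.W0 @ P)
print(torch.allclose(s0, s, atol=1e-4))  # True: spectrum preserved

Because the orthogonal factors preserve singular values, the effective weight keeps the spectrum of W0, which is the spectrum-preserving property the abstract refers to. The memory and compute savings that distinguish POET-X from POET, per the abstract, are not reproduced in this sketch.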


Source: arXiv:2603.05500v1 - http://arxiv.org/abs/2603.05500v1
PDF: https://arxiv.org/pdf/2603.05500v1

Submission: 3/6/2026
Subjects: Artificial Intelligence (AI)
