Research Paper | Researchia:202604.01027 | Pharmaceutical Research > Biochemistry

Sampling at intermediate temperatures is optimal for training large language models in protein structure prediction

L. Ghiringhelli

Abstract

We investigate the parameter space of transformer models trained on protein sequence data within a statistical mechanics framework, sampling the loss landscape at varying temperatures by Langevin dynamics to characterize the low-loss manifold and to understand the mechanisms underlying the superior performance of transformers in protein structure prediction. We find that, unlike in feedforward networks, the absence of a first-order-like transition in the loss of the transformer produces a range of intermediate temperatures with good learning properties. We show that the parameters of most layers are highly conserved at these temperatures when the embedding dimension is optimal, and we provide an operational way to find this dimension. Finally, we show that the attention matrix is more predictive of the protein's contact maps at temperatures and embedding dimensions higher than those optimal for learning.
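The sampling procedure named in the abstract can be illustrated with a minimal sketch (not the paper's code): overdamped Langevin dynamics on a toy quadratic loss, where the temperature parameter controls how far the sampler strays from the loss minimum. The update rule theta <- theta - lr * grad L(theta) + sqrt(2 * lr * T) * noise samples, at long times, the Gibbs distribution proportional to exp(-L(theta)/T). All names here (`loss_grad`, `langevin_sample`) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def loss_grad(theta):
    """Gradient of a simple quadratic loss L(theta) = 0.5 * ||theta||^2.

    Stand-in for the gradient of a real training loss; with this choice
    the Gibbs distribution exp(-L/T) is a Gaussian with variance T.
    """
    return theta

def langevin_sample(theta0, lr=1e-2, temperature=0.1, n_steps=5000, seed=0):
    """Run overdamped Langevin dynamics and return the final parameters.

    Each step combines a gradient-descent move with Gaussian noise whose
    scale sqrt(2 * lr * T) sets the sampling temperature.
    """
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta - lr * loss_grad(theta) + np.sqrt(2 * lr * temperature) * noise
    return theta

# Empirical check: for the quadratic loss, samples at temperature T should
# scatter around the minimum with variance close to T, so raising T lets the
# sampler explore progressively higher-loss regions of the landscape.
samples = np.array([langevin_sample([1.0], temperature=0.1, seed=s)[0]
                    for s in range(200)])
print(samples.var())
```

At higher `temperature` the same loop visits higher-loss configurations, which is the knob the paper varies to probe the low-loss manifold at intermediate temperatures.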


Source: arXiv:2603.29529v1
Abstract: http://arxiv.org/abs/2603.29529v1
PDF: https://arxiv.org/pdf/2603.29529v1

Submission: 4/1/2026
Subjects: Biochemistry; Pharmaceutical Research

