Research Paper | Researchia:202601.30036 [Biotechnology > Biochemistry]

Unveiling Scaling Behaviors in Molecular Language Models: Effects of Model Size, Data, and Representation

Dong Xu

Abstract

Molecular generative models, often employing GPT-style language modeling on molecular string representations, have shown promising capabilities when scaled to large datasets and model sizes. However, it remains debated whether these models follow predictable scaling laws under fixed computational budgets, an understanding crucial for optimally allocating resources among model size, data volume, and molecular representation. In this study, we systematically investigate the scaling behavior of molecular language models across both pretraining and downstream tasks. We train 300 models and conduct over 10,000 experiments, rigorously controlling compute budgets while independently varying model size, number of training tokens, and molecular representation. Our results demonstrate clear scaling laws in molecular models for both pretraining and downstream transfer, reveal the substantial impact of molecular representation on performance, and explain previously observed inconsistencies in scaling behavior for molecular generation. Additionally, we publicly release the largest library of molecular language models to date to facilitate future research and development. Code and models are available at https://github.com/SZU-ADDG/MLM-Scaling.
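The scaling laws the abstract refers to are typically power-law fits of loss against model size (or token count) at a fixed compute budget. As a minimal illustrative sketch, the snippet below fits a power law of the form L(N) = a · N^(-alpha) to synthetic loss data by linear regression in log-log space. All numbers and function names here are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def fit_power_law(n_params, losses):
    """Fit L(N) = a * N**(-alpha) via least squares on log(L) = log(a) - alpha*log(N)."""
    x = np.log(np.asarray(n_params, dtype=float))
    y = np.log(np.asarray(losses, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)  # y = slope*x + intercept
    return np.exp(intercept), -slope        # a = exp(intercept), alpha = -slope

# Synthetic data generated from a known power law (a = 10, alpha = 0.1),
# standing in for pretraining losses at several model sizes.
sizes = np.array([1e6, 1e7, 1e8, 1e9])
losses = 10.0 * sizes ** -0.1

a, alpha = fit_power_law(sizes, losses)
print(a, alpha)  # recovers a ~ 10, alpha ~ 0.1 on this noiseless data
```

In practice one would fit such curves per molecular representation (e.g. SMILES vs. alternatives) and compare exponents, which is the kind of comparison the paper's controlled experiments enable.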


Source: arXiv:2601.22757v1 - http://arxiv.org/abs/2601.22757v1
PDF: https://arxiv.org/pdf/2601.22757v1

Submission: 1/30/2026
Subjects: Biochemistry; Biotechnology

