Research Paper · Researchia:202602.20019 · [Biotechnology > Biology]

Parameter-free representations outperform single-cell foundation models on downstream benchmarks

Huan Souza

Abstract

Single-cell RNA sequencing (scRNA-seq) data exhibit strong and reproducible statistical structure. This has motivated the development of large-scale foundation models, such as TranscriptFormer, that use transformer-based architectures to learn a generative model for gene expression by embedding genes into a latent vector space. These embeddings have been used to obtain state-of-the-art (SOTA) performance on downstream tasks such as cell-type classification, disease-state prediction, and cross-species learning. Here, we ask whether similar performance can be achieved without computationally intensive deep-learning-based representations. Using simple, interpretable pipelines that rely on careful normalization and linear methods, we obtain SOTA or near-SOTA performance across multiple benchmarks commonly used to evaluate single-cell foundation models, including outperforming foundation models on out-of-distribution tasks involving novel cell types and organisms absent from the training data. Our findings highlight the need for rigorous benchmarking and suggest that the biology of cell identity can be captured by simple linear representations of single-cell gene expression data.
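The kind of pipeline the abstract describes — careful normalization followed by a linear embedding and a linear classifier — can be sketched as below. This is a hypothetical illustration on synthetic data, not the authors' exact method; the normalization constant (counts per 10k), number of principal components, and classifier choice are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

def cpm_log1p(X):
    """Library-size normalization (counts per 10k) followed by log1p."""
    lib = X.sum(axis=1, keepdims=True)
    return np.log1p(1e4 * X / np.maximum(lib, 1))

# Toy counts matrix: 200 cells x 500 genes, two synthetic cell types
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(200, 500)).astype(float)
y = np.repeat([0, 1], 100)
X[y == 1, :50] += rng.poisson(3.0, size=(100, 50))  # add marker-gene signal

clf = make_pipeline(
    FunctionTransformer(cpm_log1p),     # careful normalization
    StandardScaler(),                   # per-gene scaling
    PCA(n_components=20),               # linear embedding
    LogisticRegression(max_iter=1000),  # linear classifier
)
clf.fit(X[::2], y[::2])                 # train on even-indexed cells
acc = clf.score(X[1::2], y[1::2])       # evaluate on held-out cells
print(f"held-out accuracy: {acc:.2f}")
```

On real benchmarks one would substitute annotated scRNA-seq count matrices for the toy data, but the structure — normalize, embed linearly, classify linearly — is the same.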


Source: arXiv:2602.16696v1 (http://arxiv.org/abs/2602.16696v1)
PDF: https://arxiv.org/pdf/2602.16696v1

Submission: 2/20/2026

