Research Paper · Researchia 202604.05009 · Biotechnology > Biology

annbatch unlocks terabyte-scale training of biological data in anndata

Ilan Gold

Abstract

The scale of biological datasets now routinely exceeds system memory, making data access rather than model computation the primary bottleneck in training machine-learning models. This bottleneck is particularly acute in biology, where widely used community data formats must support heterogeneous metadata, sparse and dense assays, and downstream analysis within established computational ecosystems. Here we present annbatch, a mini-batch loader native to anndata that enables out-of-core training directly on disk-backed datasets. Across single-cell transcriptomics, microscopy and whole-genome sequencing benchmarks, annbatch increases loading throughput by up to an order of magnitude and shortens training from days to hours, while remaining fully compatible with the scverse ecosystem. annbatch establishes a practical data-loading infrastructure for scalable biological AI, allowing increasingly large and diverse datasets to be used without abandoning standard biological data formats. GitHub: https://github.com/scverse/annbatch
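The out-of-core pattern the abstract describes (reading mini-batches directly from a disk-backed array rather than loading the full dataset into memory) can be sketched generically. The code below is an illustrative sketch using a NumPy memmap, not annbatch's actual API; all function and file names here are hypothetical. The common throughput trick it demonstrates is reading contiguous chunks from disk and shuffling within each chunk, trading perfect row-level randomness for sequential I/O:

```python
import numpy as np

def iter_minibatches(path, n_obs, n_vars, batch_size=4, chunk_size=8, seed=0):
    """Yield mini-batches from a disk-backed array without loading it fully.

    Reads contiguous chunks (fast on disk), then shuffles rows within each
    in-memory chunk -- a generic sketch of the idea behind out-of-core
    loaders; the names and signature here are illustrative, not annbatch's.
    """
    # Open the file as a read-only memory map; nothing is loaded yet.
    X = np.memmap(path, dtype=np.float32, mode="r", shape=(n_obs, n_vars))
    rng = np.random.default_rng(seed)
    chunk_starts = np.arange(0, n_obs, chunk_size)
    rng.shuffle(chunk_starts)  # shuffle chunk order, not individual rows
    for start in chunk_starts:
        # One contiguous read, copied into RAM so it can be shuffled.
        chunk = np.array(X[start:start + chunk_size])
        rng.shuffle(chunk)  # shuffle rows within the in-memory chunk
        for i in range(0, len(chunk), batch_size):
            yield chunk[i:i + batch_size]

# Write a small demo array to disk, then stream it back in mini-batches.
demo = np.memmap("demo.bin", dtype=np.float32, mode="w+", shape=(32, 4))
demo[:] = np.arange(128, dtype=np.float32).reshape(32, 4)
demo.flush()

batches = list(iter_minibatches("demo.bin", n_obs=32, n_vars=4))
print(len(batches), batches[0].shape)  # → 8 (4, 4)
```

In a real workflow the same iterator shape plugs directly into a training loop (e.g. wrapping each yielded batch in a framework tensor); anndata itself also supports disk-backed access via `anndata.read_h5ad(path, backed="r")`, which is the layer annbatch builds on.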


Source: arXiv:2604.01949v1 (http://arxiv.org/abs/2604.01949v1)
PDF: https://arxiv.org/pdf/2604.01949v1

Submission: 4/5/2026
Subjects: Biology; Biotechnology

