Research Paper · Researchia:202601.28035 · Image Processing > Engineering

Scaling Next-Brain-Token Prediction for MEG

Richard Csaky

Abstract

We present a large autoregressive model for source-space MEG that scales next-token prediction to long context across datasets and scanners, covering a corpus of over 500 hours and thousands of sessions drawn from the three largest MEG datasets. A modified SEANet-style vector quantizer reduces multichannel MEG to a flattened token stream, on which we train a Qwen2.5-VL backbone from scratch to predict the next brain token and to recursively generate minutes of MEG from up to a minute of context. To evaluate long-horizon generation, we introduce task-matched tests: (i) on-manifold stability, measured as generated-only drift relative to the time-resolved distribution of real sliding windows, and (ii) conditional specificity, comparing correct-context generations against prompt-swap controls using a neurophysiologically grounded metric set. We train on CamCAN and Omega and run all analyses on held-out MOUS, establishing cross-dataset generalization. Across metrics, generations remain relatively stable over long rollouts and are closer to the correct continuation than swapped controls. Code available at: https://github.com/ricsinaruto/brain-gen.
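The tokenize-then-predict pipeline the abstract describes can be sketched in miniature. This is a hedged illustration only: the codebook below is random rather than learned by a SEANet-style encoder, the "backbone" is a bigram count model standing in for the transformer, and all names and sizes are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels, n_frames, frame_dim, codebook_size = 4, 50, 8, 16

# Toy codebook (in the paper this is learned by a SEANet-style quantizer).
codebook = rng.normal(size=(codebook_size, frame_dim))

def quantize(frames):
    """Nearest-neighbour VQ: map each frame vector to a codebook index."""
    # frames: (n, frame_dim) -> (n,) integer tokens
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Fake source-space MEG: channels x frames x features.
meg = rng.normal(size=(n_channels, n_frames, frame_dim))

# Quantize each channel, then flatten into a single token stream,
# interleaving channels at each time step.
tokens = np.stack([quantize(meg[c]) for c in range(n_channels)])
stream = tokens.T.reshape(-1)

# Toy autoregressive "backbone": smoothed bigram counts stand in for the
# transformer's next-token distribution.
counts = np.ones((codebook_size, codebook_size))
for a, b in zip(stream[:-1], stream[1:]):
    counts[a, b] += 1

def rollout(context, n_new):
    """Recursively sample n_new tokens beyond the given context."""
    out = list(context)
    for _ in range(n_new):
        p = counts[out[-1]] / counts[out[-1]].sum()
        out.append(rng.choice(codebook_size, p=p))
    return np.array(out)

gen = rollout(stream[:32], 100)  # generate well beyond the context window
```

The point of the sketch is the data flow, not the model: continuous multichannel recordings become one discrete sequence, after which generation reduces to ordinary next-token sampling.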
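The prompt-swap control can also be illustrated with a toy example. The idea: a generation conditioned on the correct context should be closer, under some feature metric, to the true continuation than to the continuation of an unrelated session. Here the feature is a simple per-channel variance vector, which is an assumption for illustration; the paper uses a neurophysiologically grounded metric set.

```python
import numpy as np

rng = np.random.default_rng(1)

def features(x):
    """Per-channel variance as a stand-in summary metric."""
    return x.var(axis=1)

# Fake data: two sessions with different channel-variance profiles.
scale_a = np.array([1.0, 2.0, 0.5])[:, None]
scale_b = np.array([3.0, 0.3, 1.5])[:, None]
true_cont = scale_a * rng.normal(size=(3, 1000))   # correct continuation
swap_cont = scale_b * rng.normal(size=(3, 1000))   # swapped-prompt control
# Pretend the model's generation follows the correct context's statistics.
generated = scale_a * rng.normal(size=(3, 1000))

d_correct = np.linalg.norm(features(generated) - features(true_cont))
d_swapped = np.linalg.norm(features(generated) - features(swap_cont))

# Conditional specificity: the generation is closer to the correct
# continuation than to the control.
assert d_correct < d_swapped
```

A model that ignored its context would score roughly the same distance against both continuations, so the gap between the two distances is what certifies conditioning.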


Source: arXiv:2601.20138v2 (http://arxiv.org/abs/2601.20138v2)
PDF: https://arxiv.org/pdf/2601.20138v2

Submission: 1/28/2026
Subjects: Engineering; Image Processing

