Stochastic Scaling Limits and Synchronization by Noise in Deep Transformer Models
Abstract
We prove pathwise convergence of the layerwise evolution of tokens in a finite-depth, finite-width transformer model with multi-layer perceptron (MLP) blocks to a continuous-time stochastic interacting particle system. We also identify the stochastic partial differential equation describing the evolution of the tokens' distribution in this limit and prove propagation of chaos when the number of tokens is large. The bounds we establish are quantitative, and the limits we consider commute. We further prove that the limiting stochastic model displays synchronization by noise and establish exponential dissipation of the interaction energy on average, provided that the common noise is sufficiently coercive relative to the deterministic self-attention drift. We finally characterize the activation functions satisfying this coercivity condition.
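To fix ideas, the setting can be sketched as follows. The layerwise token update is an Euler-type discretization of an interacting particle system driven by a common noise; the specific drift, noise coefficient, and attention parametrization below are illustrative assumptions, not the paper's exact definitions.

```latex
% Schematic setup (all specific forms below are illustrative assumptions).
% N tokens x_i^k in R^d, L layers, step size 1/L, W a Brownian motion
% COMMON to all tokens, sigma a noise coefficient:
\[
  x_i^{k+1} = x_i^{k} + \tfrac{1}{L}\, b\bigl(x_i^{k}, \hat\mu^{k}\bigr)
            + \sigma\bigl(x_i^{k}\bigr)\bigl(W_{(k+1)/L} - W_{k/L}\bigr),
  \qquad
  \hat\mu^{k} = \frac{1}{N}\sum_{j=1}^{N}\delta_{x_j^{k}},
\]
% with, for instance, a softmax self-attention drift:
\[
  b\bigl(x_i, \hat\mu\bigr)
    = \frac{\sum_{j} e^{\langle Q x_i,\, K x_j\rangle}\, V x_j}
           {\sum_{j} e^{\langle Q x_i,\, K x_j\rangle}} \;-\; x_i .
\]
% As L -> infinity, the update converges pathwise to the SDE system
\[
  \mathrm{d}X_i(t) = b\bigl(X_i(t), \mu^{N}_t\bigr)\,\mathrm{d}t
                   + \sigma\bigl(X_i(t)\bigr)\,\mathrm{d}W_t,
  \qquad
  \mu^{N}_t = \frac{1}{N}\sum_{j=1}^{N}\delta_{X_j(t)} .
\]
```

In this reading, propagation of chaos says that as N grows, the empirical measure converges to a measure-valued flow solving a stochastic Fokker-Planck (McKean-Vlasov) equation still driven by the common noise W, which is the SPDE limit the abstract refers to.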
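The synchronization-by-noise statement can also be probed numerically. Below is a minimal Euler-Maruyama sketch, assuming a plain softmax attention drift with Q = K = V = I and linear multiplicative common noise; both are illustrative stand-ins for the paper's coercivity condition on the noise coefficient. With a sufficiently strong common noise, the pairwise interaction energy collapses along the path.

```python
import numpy as np

rng = np.random.default_rng(0)

N, d = 32, 8          # number of tokens, embedding dimension
L = 2000              # number of layers (Euler steps)
h = 1.0 / L           # step size: depth-to-continuous-time scaling
beta = 1.0            # inverse temperature in the attention weights
sigma = 1.5           # common-noise strength (assumed "coercive enough")

def attention_drift(x, beta):
    """Self-attention drift: softmax-weighted average of the tokens.

    Uses Q = K = V = I for simplicity; the paper's drift involves
    general projection matrices.
    """
    logits = beta * x @ x.T                      # (N, N) similarity scores
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)            # row-wise softmax
    return w @ x - x                             # pull toward attention average

x = rng.standard_normal((N, d))
energies = []
for k in range(L):
    # Common noise: ONE Gaussian increment shared by all tokens.
    dW = np.sqrt(h) * rng.standard_normal(d)
    # Linear multiplicative common noise sigma * x * dW (illustrative choice).
    x = x + h * attention_drift(x, beta) + sigma * x * dW
    # Interaction energy: mean squared pairwise distance between tokens.
    diffs = x[:, None, :] - x[None, :, :]
    energies.append((diffs ** 2).sum() / (N * N))

print(f"interaction energy: start {energies[0]:.3f}, end {energies[-1]:.3f}")
```

The single shared increment dW is the key ingredient: independent per-token noise would spread the tokens apart, whereas common multiplicative noise contracts pairwise differences pathwise at a rate of order sigma^2/2, on top of the contraction supplied by the attention drift.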
Source: arXiv:2604.26898v1 - http://arxiv.org/abs/2604.26898v1 (PDF: https://arxiv.org/pdf/2604.26898v1)
Apr 30, 2026
Data Science
Machine Learning