Research Paper | Researchia: 202604.29069

Barriers to Universal Reasoning With Transformers (And How to Overcome Them)

Oliver Kraus

Abstract

Chain-of-Thought (CoT) has been shown to empirically improve Transformers' performance and to theoretically increase their expressivity to Turing completeness. However, whether Transformers can learn to generalize to CoT traces longer than those seen during training is understudied. We use recent theoretical frameworks for Transformer length generalization and find that -- under standard positional encodings and a finite alphabet -- Transformers with CoT cannot solve problems beyond $TC^0$, i.e. the expressivity benefits do not hold under the stricter requirement of length-generalizable learnability. However, if we allow the vocabulary to grow with problem size, we attain a length-generalizable simulation of Turing machines in which the CoT trace length is linear in the simulated runtime up to a constant. Our construction overcomes two core obstacles to reliable length generalization: repeated copying and last-occurrence retrieval. We assign each tape position a unique signpost token and log only value changes, so that the current tape symbol can be recovered through counts, circumventing both barriers. Further, we empirically show that the use of such signpost tokens and value-change encodings provides actionable guidance for improving length generalization on hard problems.
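As a concrete illustration of the value-change encoding sketched in the abstract, the following minimal Python example (not from the paper; the token names and helper functions are hypothetical) shows how logging only (signpost, value) pairs lets the current tape symbol be recovered by last-occurrence retrieval:

# Minimal sketch, assuming a hypothetical CoT trace represented as a list of
# (signpost, value) pairs. Each tape position gets a unique signpost token,
# and a pair is appended only when a cell's symbol changes, so the current
# symbol at a position is the value attached to the last occurrence of that
# position's signpost.

def encode_write(trace, head_pos, new_symbol):
    # Log a change only; unchanged cells add nothing to the trace.
    trace.append((f"POS_{head_pos}", new_symbol))

def current_symbol(trace, pos, blank="_"):
    # Last-occurrence retrieval: scan the trace backwards for the signpost.
    signpost = f"POS_{pos}"
    for token, value in reversed(trace):
        if token == signpost:
            return value
    return blank  # position never written, so it still holds the blank symbol

# Tiny example run: write 1 at cell 0, 1 at cell 1, then overwrite cell 0 with 0.
trace = []
encode_write(trace, 0, "1")
encode_write(trace, 1, "1")
encode_write(trace, 0, "0")
assert current_symbol(trace, 0) == "0"
assert current_symbol(trace, 1) == "1"
assert current_symbol(trace, 2) == "_"

Because each signpost is tied to a single tape position, the trace never re-copies the whole tape at each step, and looking up a cell reduces to finding the last occurrence of one token -- the two barriers (repeated copying and last-occurrence retrieval) that the paper identifies.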

Submitted: April 29, 2026. Subjects: Machine Learning; Data Science



Source: arXiv:2604.25800v1 (http://arxiv.org/abs/2604.25800v1)
PDF: https://arxiv.org/pdf/2604.25800v1


Submission Info
Date: Apr 29, 2026
Topic: Data Science
Area: Machine Learning