Research Paper · Researchia 202603.31016 · [Data Science > Machine Learning]

Temporal Credit Is Free

Aur Shalev Merin

Abstract

Recurrent networks do not need Jacobian propagation to adapt online. The hidden state already carries temporal credit through the forward pass; immediate derivatives suffice, provided stale trace memory does not corrupt them and gradient scales are normalized across parameter groups. An architectural rule predicts when normalization is needed: β₂ is required when gradients must pass through a nonlinear state update with no output bypass, and is unnecessary otherwise. Across ten architectures, real primate neural data, and streaming ML benchmarks, immediate derivatives with RMSprop match or exceed full RTRL while scaling to n = 1024 at 1000× less memory.
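The abstract's recipe lends itself to a short illustration. Below is a minimal sketch, assuming a tanh RNN cell trained online in PyTorch: the hidden state is detached at every step so only immediate derivatives flow (no BPTT or RTRL trace), while RMSprop's second-moment normalization (its `alpha` playing the role of β₂) equalizes gradient scales across parameter groups. The toy streaming data, cell choice, and all names are illustrative assumptions, not the paper's code.

```python
# Online learning with immediate derivatives only: the hidden state is
# detached every step, so no Jacobian is propagated through time, and
# RMSprop normalizes gradient scales across parameter groups.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_in, n_hidden, n_out = 8, 64, 1

cell = nn.RNNCell(n_in, n_hidden)        # tanh state update
readout = nn.Linear(n_hidden, n_out)     # linear readout of the hidden state
params = list(cell.parameters()) + list(readout.parameters())
opt = torch.optim.RMSprop(params, lr=1e-3, alpha=0.999)  # alpha ~ beta2

h = torch.zeros(1, n_hidden)
for t in range(1000):
    x = torch.randn(1, n_in)             # streaming input (toy data)
    y = torch.randn(1, n_out)            # streaming target (toy data)

    h = cell(x, h.detach())              # detach: temporal credit rides the
                                         # forward state, not the gradient
    loss = ((readout(h) - y) ** 2).mean()

    opt.zero_grad()
    loss.backward()                      # immediate derivatives only
    opt.step()
```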


Source: arXiv:2603.28750v1 (http://arxiv.org/abs/2603.28750v1)
PDF: https://arxiv.org/pdf/2603.28750v1

Submission: 3/31/2026
Subjects: Machine Learning; Data Science

