Research Paper · Researchia:202603.13075 · [Data Science > Machine Learning]

Temporal Straightening for Latent Planning

Ying Wang

Abstract

Learning good representations is essential for latent planning with world models. While pretrained visual encoders produce strong semantic features, they are not tailored to planning and contain information irrelevant -- or even detrimental -- to it. Inspired by the perceptual straightening hypothesis in human visual processing, we introduce temporal straightening to improve representation learning for latent planning. Using a curvature regularizer that encourages locally straightened latent trajectories, we jointly learn an encoder and a predictor. We show that reducing curvature in this way makes Euclidean distance in latent space a better proxy for geodesic distance and improves the conditioning of the planning objective. We demonstrate empirically that temporal straightening makes gradient-based planning more stable and yields significantly higher success rates across a suite of goal-reaching tasks.
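The abstract does not specify the form of the curvature regularizer. A minimal sketch of one plausible version, assuming (as in the perceptual straightening literature) that curvature is measured as the angle between successive latent displacements; the function name and this NumPy implementation are illustrative, not the paper's code:

```python
import numpy as np

def curvature_penalty(z):
    """Mean turning angle of a latent trajectory z of shape (T, d).

    A perfectly straight trajectory has penalty 0; sharper turns
    between consecutive steps increase the penalty. Minimizing this
    alongside the prediction loss encourages locally straightened
    trajectories (hypothetical regularizer, not the paper's exact loss).
    """
    d = np.diff(z, axis=0)                                  # displacements z_{t+1} - z_t
    d = d / np.linalg.norm(d, axis=1, keepdims=True)        # unit directions
    cos = np.sum(d[:-1] * d[1:], axis=1)                    # cosine between successive steps
    return np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))      # mean angle in radians

# A straight trajectory incurs zero penalty; a right-angle turn incurs pi/2.
z_line = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
z_turn = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
print(curvature_penalty(z_line))  # ~0.0
print(curvature_penalty(z_turn))  # ~1.5708 (pi/2)
```

In training, such a term would typically be added to the predictor's reconstruction or prediction loss with a small weight, trading off semantic fidelity against trajectory straightness.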


Source: arXiv:2603.12231v1 (http://arxiv.org/abs/2603.12231v1)
PDF: https://arxiv.org/pdf/2603.12231v1

Submission: 3/13/2026
Subjects: Machine Learning; Data Science

