
A Comparative Study of Dynamic Programming and Reinforcement Learning in Finite Horizon Dynamic Pricing

Lev Razumovskiy

Abstract

This paper provides a systematic comparison between Fitted Dynamic Programming (DP), where demand is estimated from data, and Reinforcement Learning (RL) methods in finite-horizon dynamic pricing problems. We analyze their performance across environments of increasing structural complexity, ranging from a single-typology benchmark to multi-typology settings with heterogeneous demand and inter-temporal revenue constraints. Unlike simplified comparisons that restrict DP to low-dimensional settings, we apply dynamic programming in richer, multi-dimensional environments with multiple product types and constraints. We evaluate revenue performance, stability, constraint satisfaction behavior, and computational scaling, highlighting the trade-offs between explicit expectation-based optimization and trajectory-based learning.

Submitted: April 17, 2026
Subjects: Machine Learning; Data Science
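To make the "explicit expectation-based optimization" side of the comparison concrete, below is a minimal backward-induction sketch for a toy single-product, finite-horizon pricing problem. The Bernoulli demand model, the logit purchase probability, and all parameters here are illustrative assumptions, not the model or setup used in the paper.

```python
# Hypothetical finite-horizon dynamic pricing via backward induction.
# Demand form and all parameters are assumptions for illustration only.
import math

T = 10                     # number of selling periods (assumption)
C = 5                      # starting inventory (assumption)
PRICES = [4.0, 6.0, 8.0]   # discrete price grid (assumption)

def sale_prob(p):
    """Assumed Bernoulli demand: at most one arrival per period,
    with a logit purchase probability decreasing in price."""
    return 1.0 / (1.0 + math.exp(0.5 * (p - 6.0)))

# V[t][c]: optimal expected revenue-to-go with c units left at period t.
V = [[0.0] * (C + 1) for _ in range(T + 1)]
policy = [[None] * (C + 1) for _ in range(T)]

for t in range(T - 1, -1, -1):
    for c in range(C + 1):
        if c == 0:
            continue  # no inventory, no revenue-to-go
        best_val, best_p = -float("inf"), None
        for p in PRICES:
            q = sale_prob(p)
            # Exact expectation over sale / no-sale outcomes:
            val = q * (p + V[t + 1][c - 1]) + (1 - q) * V[t + 1][c]
            if val > best_val:
                best_val, best_p = val, p
        V[t][c] = best_val
        policy[t][c] = best_p
```

The key contrast with trajectory-based RL is visible in the inner loop: the Bellman update averages over outcomes analytically using the (here assumed, in fitted DP estimated) demand model, rather than sampling transitions.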


Source: arXiv:2604.14059v1 (http://arxiv.org/abs/2604.14059v1)
PDF: https://arxiv.org/pdf/2604.14059v1

