Research Paper · Researchia: 202603.10030 · [Mathematics]

Generative Adversarial Regression (GAR): Learning Conditional Risk Scenarios

Saeed Asadi

Abstract

We propose Generative Adversarial Regression (GAR), a framework for learning conditional risk scenarios through generators aligned with downstream risk objectives. GAR builds on a regression characterization of conditional risk for elicitable functionals, including quantiles, expectiles, and jointly elicitable pairs. We extend this principle from point prediction to generative modeling by training generators whose policy-induced risk matches that of real data under the same context. To ensure robustness across all policies, GAR adopts a minimax formulation in which an adversarial policy identifies worst-case discrepancies in risk evaluation while the generator adapts to eliminate them. This structure preserves alignment with the risk functional across a broad class of policies rather than a fixed, pre-specified set. We illustrate GAR through a tail-risk instantiation based on jointly elicitable (VaR, ES) objectives. Experiments on S&P 500 data show that GAR produces scenarios that better preserve downstream risk than unconditional, econometric, and direct predictive baselines while remaining stable under adversarially selected policies.
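The abstract builds on consistent scoring functions for elicitable functionals: quantiles (VaR) admit the pinball loss, and the (VaR, ES) pair is jointly elicitable via the Fissler-Ziegel family. As a minimal sketch, not the paper's implementation, the pinball loss and one commonly used member of that family (the "FZ0" loss of Patton, Ziegel, and Chen, valid for negative ES under a left-tail convention) can be written as:

```python
import numpy as np

def pinball_loss(y, q, alpha):
    """Consistent scoring function for the alpha-quantile (VaR):
    its expectation is minimized when q is the true alpha-quantile of y."""
    return np.maximum(alpha * (y - q), (alpha - 1.0) * (y - q))

def fz0_loss(y, v, e, alpha):
    """'FZ0' joint scoring function for (VaR, ES) at level alpha.
    Requires e < 0 (left-tail convention with e <= v); its expectation
    is jointly minimized at the true (VaR, ES) pair."""
    indicator = np.where(y <= v, 1.0, 0.0)  # 1 when the loss breaches VaR
    return -indicator * (v - y) / (alpha * e) + v / e + np.log(-e) - 1.0
```

In a GAR-style training loop, such scores would be evaluated on both real and generated returns under a candidate policy, with the generator trained to close the gap; the specific loop structure here is an assumption, not taken from the paper.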


Source: arXiv:2603.08553v1 (http://arxiv.org/abs/2603.08553v1)
PDF: https://arxiv.org/pdf/2603.08553v1

Submission: 3/10/2026
Subjects: Mathematics
