On two ways to use determinantal point processes for Monte Carlo integration
Abstract
The standard Monte Carlo estimator $\widehat{I}_N^{\mathrm{MC}}$ of $\int f\,\mathrm{d}\omega$ relies on independent samples from $\omega$ and has variance of order $1/N$. Replacing the independent samples with a determinantal point process (DPP), a repulsive distribution, keeps the estimator consistent, with variance rates that depend on how the DPP is adapted to $f$ and $\omega$. We examine two existing DPP-based estimators: one by Bardenet & Hardy (2020), with a rate of $\mathcal{O}(N^{-(1+1/d)})$ for smooth $f$ but relying on a fixed DPP; the other, by Ermakov & Zolotukhin (1960), which is unbiased with a rate of order $1/N$, like Monte Carlo, but whose DPP is tailored to $f$. We revisit these estimators, generalize them to continuous settings, and provide sampling algorithms.
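To make the mechanism concrete, here is a minimal sketch in a *finite* setting, not the continuous estimators of the paper: a projection DPP on a ground set of $M$ points, sampled with the standard sequential chain-rule (Hough–Krishnapur–Peres–Virág) algorithm, plugged into the generic unbiased estimator $\sum_{x \in X} f(x)\,\omega(x)/K(x,x)$, whose unbiasedness follows from the marginals $\mathbb{P}(x \in X) = K(x,x)$. The kernel, integrand, and sizes below are illustrative choices, not taken from the paper.

```python
import numpy as np

def sample_projection_dpp(Phi, rng):
    """Draw one sample from the projection DPP on {0, ..., M-1} with marginal
    kernel K = Phi @ Phi.T, where Phi has k orthonormal columns, using the
    sequential chain-rule (HKPV-style) algorithm."""
    V = Phi.copy()
    M, k = V.shape
    picked = []
    for _ in range(k):
        # Probabilities of the next point are the squared row norms of V.
        p = np.clip(np.sum(V * V, axis=1), 0.0, None)
        i = rng.choice(M, p=p / p.sum())
        picked.append(int(i))
        # Condition on i being in the sample: restrict the spanned subspace
        # to vectors vanishing at coordinate i, then re-orthonormalize.
        j = np.argmax(np.abs(V[i]))
        V = V - np.outer(V[:, j], V[i] / V[i, j])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return picked

def dpp_estimate(f_vals, w, Phi, rng):
    """Unbiased estimator of sum_i f_i * w_i: since P(i in X) = K_ii,
    E[sum_{i in X} f_i * w_i / K_ii] = sum_i f_i * w_i."""
    K_diag = np.sum(Phi * Phi, axis=1)
    X = sample_projection_dpp(Phi, rng)
    return float(sum(f_vals[i] * w[i] / K_diag[i] for i in X))

rng = np.random.default_rng(0)
M, k = 50, 10
# A hypothetical kernel: orthonormalize k random columns via QR.
Phi, _ = np.linalg.qr(rng.standard_normal((M, k)))
w = np.full(M, 1.0 / M)                    # reference weights (omega)
f_vals = np.sin(np.linspace(0.0, 3.0, M))  # toy integrand
truth = float(f_vals @ w)
estimates = [dpp_estimate(f_vals, w, Phi, rng) for _ in range(2000)]
# The empirical mean of the estimates should sit close to `truth`.
```

Each draw contains exactly $k$ distinct points, and averaging the estimator over repeated draws recovers the true weighted sum, illustrating unbiasedness; the variance *rates* discussed in the abstract depend on how the kernel is adapted to $f$ and $\omega$, which this generic sketch does not attempt.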
Source: arXiv:2604.19698v1 - http://arxiv.org/abs/2604.19698v1 PDF: https://arxiv.org/pdf/2604.19698v1 Original Link: http://arxiv.org/abs/2604.19698v1
Apr 22, 2026
Data Science
Machine Learning