Research Paper | Researchia: 202604.22034

On two ways to use determinantal point processes for Monte Carlo integration

Guillaume Gautier

Abstract

The standard Monte Carlo estimator $\widehat{I}_N^{\mathrm{MC}}$ of $\int f\,d\omega$ relies on independent samples from $\omega$ and has variance of order $1/N$. Replacing the samples with a determinantal point process (DPP), a repulsive distribution, makes the estimator consistent, with variance rates that depend on how the DPP is adapted to $f$ and $\omega$. We examine two existing DPP-based estimators: one, by Bardenet & Hardy (2020), achieves a rate of $\mathcal{O}(N^{-(1+1/d)})$ for smooth $f$ but relies on a fixed DPP; the other, by Ermakov & Zolotukhin (1960), is unbiased with a rate of order $1/N$, like Monte Carlo, but its DPP is tailored to $f$. We revisit these estimators, generalize them to continuous settings, and provide sampling algorithms.
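For context, the $1/N$ baseline variance that the abstract compares against is easy to check numerically. The sketch below uses a uniform $\omega$ on $[0,1]^d$ and the integrand $f(x) = \sum_j x_j^2$; both are illustrative choices for the standard i.i.d. estimator only, not the DPP-based estimators studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(f, n, d=1):
    """Standard Monte Carlo estimate of int f d(omega), omega uniform on [0,1]^d."""
    x = rng.random((n, d))
    return f(x).mean()

# Smooth test integrand with known integral: int_0^1 x^2 dx = 1/3.
f = lambda x: (x ** 2).sum(axis=1)

# Empirical variance of the i.i.d. estimator at two sample sizes;
# doubling N should roughly halve it, reflecting the O(1/N) rate.
variances = {}
for n in (1000, 2000):
    variances[n] = np.var([mc_estimate(f, n) for _ in range(500)])
    print(n, variances[n])
```

The DPP-based estimators replace the i.i.d. draw inside `mc_estimate` with a repulsive joint sample, which is what changes the exponent in the variance rate.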

Submitted: April 22, 2026
Subjects: Machine Learning; Data Science



Source: arXiv:2604.19698v1 (http://arxiv.org/abs/2604.19698v1)
PDF: https://arxiv.org/pdf/2604.19698v1
