
What is Learnable in Valiant's Theory of the Learnable?

Steve Hanneke

Abstract

Valiant's 1984 paper is widely credited with introducing the PAC learning model, but it in fact introduced a different model: unlike in PAC learning, the learner receives only positive examples, may issue membership queries, and must output a hypothesis with no false positives. Prior work characterized variants of this model, including the case without queries. We revisit Valiant's original model and ask: which classes are learnable in it? For every finite domain, including Valiant's Boolean-hypercube setting, we show that a class is learnable if and only if every realizable positive sample can be certified by a poly-size adaptive query-compression scheme. This is a new variant of sample compression in which the learner certifies samples via a short interaction with the membership oracle. Our characterization shows that learnability in Valiant's model is strictly sandwiched between learnability in the PAC model and learnability in the variant of Valiant's model without membership queries. This is one of the rare cases where introducing membership queries changes the set of learnable classes, and not just the sample or computational complexity. Next, we study the natural extension of the model to arbitrary domains. While we do not obtain an exact characterization there, our techniques readily generalize and show that the same strict sandwiching persists. Finally, we show that d-dimensional halfspaces, which are not learnable without queries, are learnable with queries: we give a poly(d) · Õ(1/ε)-sample and poly(d) · polylog(1/ε)-query algorithm, and prove that Ω(d) samples or queries are necessary. To our knowledge, this is the first algorithm for halfspaces in Valiant's model. Together, these results uncover a surprisingly rich theory behind Valiant's original notion of learnability and introduce ideas that may be of independent interest in learning theory.
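
As a concrete illustration of the learning protocol just described (positive examples only, optional membership queries, and an output hypothesis that must make no false positives), the sketch below implements the classical positive-only learner for conjunctions of Boolean literals, a textbook example class that is learnable in this model without using any queries. The names, types, and target used here are illustrative assumptions for this sketch; this is not the paper's query-compression algorithm.

```python
# A minimal sketch of a learner in Valiant's model for conjunctions of Boolean
# literals over {0,1}^n.  It uses only positive examples and, assuming the
# target really is a conjunction (realizability), its output has no false
# positives.  All names here are illustrative, not taken from the paper.

from typing import List, Set, Tuple

Literal = Tuple[int, bool]   # (variable index, required truth value)
Example = Tuple[int, ...]    # a point of the Boolean hypercube {0,1}^n


def learn_conjunction(n: int, positives: List[Example]) -> Set[Literal]:
    """Return the most specific conjunction consistent with the positives.

    Start from the conjunction of all 2n literals and delete every literal
    falsified by some positive example.  The target's literals are never
    falsified by its own positive examples, so the output implies the target
    and therefore makes no false positives.
    """
    hypothesis: Set[Literal] = {(i, b) for i in range(n) for b in (False, True)}
    for x in positives:
        hypothesis = {(i, b) for (i, b) in hypothesis if bool(x[i]) == b}
    return hypothesis


def predict(hypothesis: Set[Literal], x: Example) -> bool:
    """Label a point positive iff it satisfies every remaining literal."""
    return all(bool(x[i]) == b for (i, b) in hypothesis)


if __name__ == "__main__":
    # Hypothetical target: x0 AND (NOT x2) over {0,1}^3.
    positives = [(1, 0, 0), (1, 1, 0)]   # points the target labels positive
    h = learn_conjunction(3, positives)
    assert predict(h, (1, 1, 0))          # consistent with the sample
    assert not predict(h, (0, 1, 1))      # a point the target rejects
    print(sorted(h))                      # [(0, True), (2, False)]
```

Classes such as halfspaces, which the abstract notes are not learnable without queries, are precisely where this queryless strategy breaks down and the membership oracle becomes essential; the sketch only fixes the shape of the interaction being studied.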

Submitted: May 14, 2026 | Subjects: Machine Learning; Data Science



Source: arXiv:2605.13840v1 (http://arxiv.org/abs/2605.13840v1)
PDF: https://arxiv.org/pdf/2605.13840v1

