
Improving Reproducibility in Evaluation through Multi-Level Annotator Modeling

Deepak Pandita

Abstract

As generative AI models such as large language models (LLMs) become more pervasive, ensuring the safety, robustness, and overall trustworthiness of these systems is paramount. However, AI is currently facing a reproducibility crisis driven by unreliable evaluations and unrepeatable experimental results. While human raters are often used to assess models for utility and safety, they introduce divergent biases and subjective opinions into their annotations. Overcoming this variance is exceptionally challenging because very little data exists to study how experimental repeatability actually improves as the annotator pool grows. Standard evaluation practices typically rely on a small number of annotations per item (often 3 to 5) and lack the persistent rater identifiers necessary to model individual variance across items. In this work, we introduce a multi-level bootstrapping approach to realistically model annotator behavior. Leveraging datasets with a large number of ratings and persistent rater identifiers, we analyze the tradeoffs between the number of items (N) and the number of responses per item (K) required to achieve statistical significance.

Submitted: May 14, 2026
Subjects: AI; Artificial Intelligence
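The paper's abstract does not spell out the resampling procedure, but the idea of a multi-level (hierarchical) bootstrap over raters and their ratings can be sketched as follows. The function name `multilevel_bootstrap` and the `ratings` data layout (persistent rater id mapped to that rater's scores) are assumptions for illustration, not the authors' implementation.

```python
import random
import statistics

def multilevel_bootstrap(ratings, n_boot=1000, seed=0):
    """Two-level bootstrap sketch: resample raters with replacement,
    then resample each chosen rater's own ratings with replacement,
    so each replicate preserves both between-rater and within-rater
    variance. `ratings` maps a persistent rater id to a list of scores."""
    rng = random.Random(seed)
    rater_ids = list(ratings)
    means = []
    for _ in range(n_boot):
        # Level 1: resample the rater pool.
        sampled = rng.choices(rater_ids, k=len(rater_ids))
        pooled = []
        for rid in sampled:
            scores = ratings[rid]
            # Level 2: resample this rater's ratings.
            pooled.extend(rng.choices(scores, k=len(scores)))
        means.append(statistics.fmean(pooled))
    means.sort()
    point = statistics.fmean(means)
    # Percentile 95% interval over the bootstrap replicates.
    lo = means[int(0.025 * len(means))]
    hi = means[int(0.975 * len(means))]
    return point, (lo, hi)

# Toy data: 5 raters with persistent ids, each rating 10 items on a 1-5 scale.
toy = {f"rater_{i}": [1 + (i + j) % 5 for j in range(10)] for i in range(5)}
estimate, ci = multilevel_bootstrap(toy, n_boot=500, seed=42)
```

Rerunning the same procedure while varying how many items (N) and how many raters' responses per item (K) enter `ratings` gives an empirical view of the N-versus-K tradeoff the abstract describes: the width of the bootstrap interval shows how quickly repeatability improves as each dimension grows.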


Source: arXiv:2605.13801v1 (http://arxiv.org/abs/2605.13801v1)
PDF: https://arxiv.org/pdf/2605.13801v1
