Research Paper · Researchia: 202602.19047

This human study did not involve human subjects: Validating LLM simulations as behavioral evidence

Jessica Hullman


Submitted: February 19, 2026
Subjects: AI; Artificial Intelligence

Description / Details

A growing literature uses large language models (LLMs) as synthetic participants to generate cost-effective and nearly instantaneous responses in social science experiments. However, there is limited guidance on when such simulations support valid inference about human behavior. We contrast two strategies for obtaining valid estimates of causal effects and clarify the assumptions under which each is suitable for exploratory versus confirmatory research. Heuristic approaches seek to establish that simulated and observed human behavior are interchangeable through prompt engineering, model fine-tuning, and other repair strategies designed to reduce LLM-induced inaccuracies. While useful for many exploratory tasks, heuristic approaches lack the formal statistical guarantees typically required for confirmatory research. In contrast, statistical calibration combines auxiliary human data with statistical adjustments to account for discrepancies between observed and simulated responses. Under explicit assumptions, statistical calibration preserves validity and provides more precise estimates of causal effects at lower cost than experiments that rely solely on human participants. Yet the potential of both approaches depends on how well LLMs approximate the relevant populations. We consider what opportunities are overlooked when researchers focus myopically on substituting LLMs for human participants in a study.
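The statistical calibration strategy described above can be illustrated with a minimal sketch in the style of prediction-powered inference: a small set of paired human and simulated responses is used to estimate the LLM's systematic error (a "rectifier"), which then corrects an estimate computed from a much larger pool of simulation-only responses. All data here are synthetic and the variable names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n units with both human and simulated outcomes,
# plus N additional units with simulated outcomes only.
n, N = 100, 5000
true_mean = 0.6        # the estimand: mean human response
llm_bias = 0.15        # the LLM is systematically off (unknown in practice)

y_human = rng.normal(true_mean, 1.0, n)             # observed human outcomes
sim_paired = y_human + rng.normal(llm_bias, 0.5, n)  # LLM simulations of those same units
sim_only = rng.normal(true_mean + llm_bias, 1.1, N)  # simulation-only pool

# Naive estimate: trust the simulations directly (inherits the LLM's bias).
naive = sim_only.mean()

# Calibrated estimate: estimate the simulation error from the paired
# sample and subtract it from the simulation-only mean.
rectifier = (y_human - sim_paired).mean()
calibrated = sim_only.mean() + rectifier

# Standard error combines both sources of noise; when n is small the
# rectifier term dominates, which is why auxiliary human data still matters.
se = np.sqrt(sim_only.var(ddof=1) / N
             + (y_human - sim_paired).var(ddof=1) / n)
```

Under the assumption that the simulation error is stable across the paired and simulation-only units, the calibrated estimate remains centered on the human quantity while the naive one does not, and the large N drives down the variance of the simulation term.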


Source: arXiv:2602.15785v1 (http://arxiv.org/abs/2602.15785v1)
PDF: https://arxiv.org/pdf/2602.15785v1

