Strategic Algorithmic Monoculture: Experimental Evidence from Coordination Games
Abstract
AI agents increasingly operate in multi-agent environments where outcomes depend on coordination. We distinguish primary algorithmic monoculture -- baseline action similarity -- from strategic algorithmic monoculture, whereby agents adjust similarity in response to incentives. We implement a simple experimental design that cleanly separates these forces, and deploy it on human and large language model (LLM) subjects. LLMs exhibit high levels of baseline similarity (primary monoculture) and, like humans, they regulate it in response to coordination incentives (strategic monoculture). While LLMs coordinate extremely well on similar actions, they lag behind humans in sustaining heterogeneity when divergence is rewarded.
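The abstract does not spell out the game's payoff structure, so the following is only a minimal sketch of how the two forces could be separated: a baseline treatment with no coordination payoff, a "match" treatment that rewards choosing the same action, and a "mismatch" treatment that rewards diverging. The action set, payoff-driven behavioral model, and "human-like" vs. "LLM-like" concentration parameters are all assumptions for illustration, not details from the paper.

```python
import random

ACTIONS = list(range(10))   # assumed action set size
N_PAIRS = 10_000            # simulated pairs per treatment

def agent_choice(treatment: str, concentration: float) -> int:
    """Draw an action from a skewed distribution; 'concentration' stands in for
    how strongly an agent favors a focal action (action 0) by default."""
    # Assumed behavioral model: matching incentives push agents harder onto the
    # focal action; mismatching incentives flatten the distribution; the baseline
    # treatment sits in between (no coordination payoff at all).
    if treatment == "match":
        weights = [concentration * 3 if a == 0 else 1 for a in ACTIONS]
    elif treatment == "mismatch":
        weights = [1 for _ in ACTIONS]
    else:  # baseline
        weights = [concentration if a == 0 else 1 for a in ACTIONS]
    return random.choices(ACTIONS, weights=weights, k=1)[0]

def coincidence_rate(treatment: str, concentration: float) -> float:
    """Fraction of independent pairs that pick the same action."""
    hits = sum(
        agent_choice(treatment, concentration) == agent_choice(treatment, concentration)
        for _ in range(N_PAIRS)
    )
    return hits / N_PAIRS

if __name__ == "__main__":
    for label, conc in [("human-like", 1.5), ("LLM-like", 4.0)]:
        base = coincidence_rate("baseline", conc)
        match = coincidence_rate("match", conc)
        mismatch = coincidence_rate("mismatch", conc)
        # Primary monoculture: baseline coincidence. Strategic monoculture: how far
        # the match/mismatch treatments move coincidence away from that baseline.
        print(f"{label}: baseline={base:.3f}  match={match:.3f}  mismatch={mismatch:.3f}")
```

Under these assumptions, a high baseline coincidence rate corresponds to primary monoculture, while the gap between the baseline and the match/mismatch treatments corresponds to strategic monoculture; the paper's finding would show up as LLM-like agents moving coincidence up readily under matching incentives but failing to push it far down when divergence is rewarded.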
Source: arXiv:2604.09502v1 (http://arxiv.org/abs/2604.09502v1; PDF: https://arxiv.org/pdf/2604.09502v1)
Apr 14, 2026
Artificial Intelligence
AI