Fully Autonomous AI Agents Should Not be Developed
Abstract
This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels and detail the ethical values at play in each, documenting trade-offs in potential benefits and risks. Our analysis reveals that risks to people increase with the autonomy of a system: The more control a user cedes to an AI agent, the more risks to people arise. Particularly concerning are safety risks, which affect human life and impact further values.
Source: Semantic Scholar - arXiv.org (44 citations) PDF: N/A Original Link: https://www.semanticscholar.org/paper/b2bccc03f0476228e3fb9f2c0f3b2d4cebb82d25
Apr 25, 2026
Computer Science
Peer Reviewed