Research Paper · Researchia:202604.25010

Fully Autonomous AI Agents Should Not be Developed

Margaret Mitchell

Abstract

This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels and detail the ethical values at play in each, documenting trade-offs in potential benefits and risks. Our analysis reveals that risks to people increase with the autonomy of a system: The more control a user cedes to an AI agent, the more risks to people arise. Particularly concerning are safety risks, which affect human life and impact further values.

Submitted: April 25, 2026 · Subjects: Peer Reviewed; Computer Science



Source: Semantic Scholar (arXiv.org), 44 citations
PDF: N/A
Original link: https://www.semanticscholar.org/paper/b2bccc03f0476228e3fb9f2c0f3b2d4cebb82d25

