Research Paper · Researchia:202601.29073 · [Computational Linguistics > NLP]

Industrialized Deception: The Collateral Effects of LLM-Generated Misinformation on Digital Ecosystems

Alexander Loth

Abstract

Research on generative AI and misinformation has evolved since our 2024 survey. This paper presents an updated perspective, transitioning from literature review to practical countermeasures. We report on changes in the threat landscape, including improved AI-generated content produced by Large Language Models (LLMs) and multimodal systems. Central to this work are our practical contributions: JudgeGPT, a platform for evaluating human perception of AI-generated news, and RogueGPT, a controlled stimulus generation engine for research. Together, these tools form an experimental pipeline for studying how humans perceive and detect AI-generated misinformation. Our findings show that detection capabilities have improved, but the competition between generation and detection continues. We discuss mitigation strategies, including LLM-based detection, inoculation approaches, and the dual-use nature of generative AI. This work contributes to research addressing the adverse impacts of AI on information quality.


Source: arXiv:2601.21963v1 (http://arxiv.org/abs/2601.21963v1)
PDF: https://arxiv.org/pdf/2601.21963v1

Submission: 1/29/2026
Subjects: NLP; Computational Linguistics
