
Benchmarking System Dynamics AI Assistants: Cloud Versus Local LLMs on CLD Extraction and Discussion

Terry Leitch

Abstract

We present a systematic evaluation of large language model families, spanning both proprietary cloud APIs and locally hosted open-source models, on two purpose-built benchmarks for System Dynamics AI assistance: the CLD Leaderboard (53 tests, structured causal loop diagram extraction) and the Discussion Leaderboard (interactive model discussion, feedback explanation, and model-building coaching). On CLD extraction, cloud models achieve 77–89% overall pass rates; the best local model reaches 77% (Kimi K2.5 GGUF Q3, zero-shot engine), matching mid-tier cloud performance. On Discussion, the best local models achieve 50–100% on model-building steps and 47–75% on feedback explanation, but only 0–50% on error fixing, a category dominated by long-context prompts that expose memory limits in local deployments. A central contribution of this paper is a systematic analysis of model type effects on performance: we compare reasoning vs. instruction-tuned architectures, GGUF (llama.cpp) vs. MLX (mlx_lm) backends, and quantization levels (Q3 / Q4_K_M / MLX-3bit / MLX-4bit / MLX-6bit) across the same underlying model families. We find that backend choice has a larger practical impact than quantization level: mlx_lm does not enforce JSON schema constraints, requiring explicit prompt-level JSON instructions, while llama.cpp's grammar-constrained sampling handles JSON reliably but causes indefinite generation on long-context prompts for dense models. We document the full parameter sweep (temperature t, top-p, top-k) for all local models, cleaned timing data (stuck requests excluded), and a practitioner guide for running 671B–123B parameter models on Apple Silicon.
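The backend difference the abstract describes can be made concrete. The sketch below assumes the llama-cpp-python and mlx_lm packages; the CLD schema, model paths, and repo name are illustrative stand-ins, not the paper's actual harness. With llama.cpp, a JSON schema is compiled into a grammar that constrains sampling; with mlx_lm, the JSON contract can only live in the prompt and must be validated after generation.

    # Sketch only: paths, repo name, and the CLD schema are illustrative.
    from llama_cpp import Llama
    from mlx_lm import load, generate

    CLD_SCHEMA = {
        "type": "object",
        "properties": {
            "variables": {"type": "array", "items": {"type": "string"}},
            "links": {"type": "array", "items": {
                "type": "object",
                "properties": {
                    "from": {"type": "string"},
                    "to": {"type": "string"},
                    "polarity": {"type": "string", "enum": ["+", "-"]},
                },
                "required": ["from", "to", "polarity"],
            }},
        },
        "required": ["variables", "links"],
    }

    TASK = "Extract a causal loop diagram from: <source text>"

    # llama.cpp: the schema is enforced via grammar-constrained sampling.
    llm = Llama(model_path="kimi-k2.5.Q4_K_M.gguf", n_ctx=32768)
    constrained = llm.create_chat_completion(
        messages=[{"role": "user", "content": TASK}],
        response_format={"type": "json_object", "schema": CLD_SCHEMA},
    )

    # mlx_lm: no schema enforcement, so the JSON format goes in the prompt.
    model, tokenizer = load("mlx-community/some-model-4bit")  # hypothetical repo
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": TASK + "\nRespond with ONLY a JSON object "
          'of the form {"variables": [...], "links": [{"from", "to", "polarity"}]}.'}],
        add_generation_prompt=True, tokenize=False,
    )
    unconstrained = generate(model, tokenizer, prompt=prompt, max_tokens=1024)

The grammar-constrained path is also where the long-context stall the abstract mentions would surface: constrained decoding adds per-token filtering on top of already slow long-prompt generation for dense models.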
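The (t, p, k) sweep itself is simple to reproduce. A minimal sketch, again assuming llama-cpp-python, with an illustrative grid (the paper's actual sweep values are not reproduced here):

    import itertools
    from llama_cpp import Llama

    llm = Llama(model_path="model.Q4_K_M.gguf", n_ctx=8192)  # illustrative path

    # Illustrative (temperature, top-p, top-k) grid; substitute values under test.
    for t, p, k in itertools.product([0.0, 0.3, 0.7], [0.9, 1.0], [20, 40]):
        out = llm.create_completion(
            "Extract a causal loop diagram from: <source text>",
            temperature=t, top_p=p, top_k=k, max_tokens=512,
        )
        print(f"t={t} p={p} k={k}: {out['choices'][0]['text'][:60]!r}")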

Submitted: April 21, 2026
Subjects: AI; Artificial Intelligence



Source: arXiv:2604.18566v1 - http://arxiv.org/abs/2604.18566v1
PDF: https://arxiv.org/pdf/2604.18566v1
