
GLiGuard: Schema-Conditioned Classification for LLM Safeguard

Urchade Zaratiana

Abstract

Ensuring safe, policy-compliant outputs from large language models requires real-time content moderation that can scale across multiple safety dimensions. However, state-of-the-art guardrail models rely on autoregressive decoders with 7B–27B parameters, reformulating what is fundamentally a classification problem as sequential text generation, a design choice that incurs high latency and scales poorly to multi-aspect evaluation. In this work, we introduce GLiGuard, a 0.3B-parameter schema-conditioned bidirectional encoder adapted from GLiNER2 for LLM content moderation. The key idea is to encode task definitions and label semantics directly into the input sequence as structured token schemas, enabling simultaneous evaluation of prompt safety, response safety, refusal detection, 14 fine-grained harm categories, and 11 jailbreak strategies in a single non-autoregressive forward pass. This schema-conditioned design lets supported task and label blocks be composed directly in the input schema at inference time. Across nine established safety benchmarks, GLiGuard achieves F1 scores competitive with 7B–27B decoder-based guards despite being 23–90× smaller, while delivering up to 16× higher throughput and 17× lower latency. These results suggest that compact bidirectional encoders can approach the accuracy of much larger guard models while drastically reducing inference cost. Code and models are available at https://github.com/fastino-ai/GLiGuard.
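To make the schema-conditioning idea concrete, here is a minimal sketch of how label blocks can be serialized into the input and scored in one bidirectional forward pass. This is not the released GLiGuard/GLiNER2 code: the backbone, the marker tokens, and the shared scoring head are all hypothetical stand-ins, and the head is untrained, so the scores below are meaningless until fine-tuned. The sketch only shows the structural idea the abstract describes: every task/label gets a token in the sequence, and all labels are classified simultaneously from their token representations, with no autoregressive decoding.

```python
# Illustrative sketch of schema-conditioned classification (not the official
# GLiGuard implementation). Task/label markers are prepended to the input,
# encoded in a single bidirectional pass, and each marker's hidden state is
# scored independently by a shared binary head.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "microsoft/deberta-v3-small"  # stand-in backbone; GLiGuard's weights differ
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL)

# Hypothetical schema: one marker token per task/label to evaluate. The real
# model covers 14 harm categories and 11 jailbreak strategies; we show a few.
labels = ["[PROMPT_SAFE]", "[RESPONSE_SAFE]", "[REFUSAL]",
          "[HARM:violence]", "[JB:roleplay]"]
tokenizer.add_special_tokens({"additional_special_tokens": labels})
encoder.resize_token_embeddings(len(tokenizer))

# Untrained head, for illustration only: one logit per label token.
scorer = torch.nn.Linear(encoder.config.hidden_size, 1)

def classify(prompt: str, response: str) -> dict[str, float]:
    # Compose the label schema and the conversation into one sequence; all
    # labels are then scored in a single non-autoregressive forward pass.
    text = " ".join(labels) + f" user: {prompt} assistant: {response}"
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state[0]  # (seq_len, hidden_size)
    scores = {}
    for tok, tok_id in zip(labels, tokenizer.convert_tokens_to_ids(labels)):
        pos = (enc["input_ids"][0] == tok_id).nonzero()[0, 0]
        scores[tok] = torch.sigmoid(scorer(hidden[pos])).item()
    return scores

print(classify("How do I pick a lock?", "I can't help with that."))
```

Because the schema is just part of the input sequence, adding or dropping a task at inference time amounts to editing the marker list, which is what lets a single encoder pass cover prompt safety, response safety, refusal detection, and the fine-grained categories at once.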

Submitted: May 11, 2026
Subjects: Cybersecurity; Computer Science



Source: arXiv:2605.07982v1 - http://arxiv.org/abs/2605.07982v1
PDF: https://arxiv.org/pdf/2605.07982v1

