Research Paper · Researchia:202603.10010

Speak in Context: Multilingual ASR with Speech Context Alignment via Contrastive Learning

Yuchen Zhang

Abstract

Automatic speech recognition (ASR) has benefited from advances in pretrained speech and language models, yet most systems remain constrained to monolingual settings and short, isolated utterances. While recent efforts in context-aware ASR show promise, two key challenges persist: limited multilingual support and the absence of principled alignment between speech and contextual representations. In this paper, we introduce a context-aware multilingual ASR framework that supports diverse languages and accents while preserving the modularity of pretrained models. Our approach combines a frozen speech encoder and a decoder-only language model via a lightweight projection module, allowing structured context prompts, including dialogue history and biasing words, to guide transcription. To improve interaction between speech and context, we employ a contrastive learning objective that aligns their representations in a shared embedding space. Evaluations on over 1,500 hours of real-world conversational speech across 11 languages and 5 English dialects show that contextual input consistently improves recognition quality. Contrastive alignment provides additional gains when applied to different context types, with an overall performance gain of over 5%. These results highlight the importance of both contextual modeling and cross-modal alignment in multilingual ASR.

Submitted: March 10, 2026
Subjects: NLP; Computational Linguistics
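The abstract's contrastive objective, aligning speech and context representations in a shared embedding space, is commonly implemented as a symmetric InfoNCE loss over pooled embeddings. The sketch below is an illustration of that general technique, not the paper's actual implementation; all function names, the temperature value, and the use of pooled per-utterance vectors are assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Unit-normalize rows so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def contrastive_alignment_loss(speech_emb, context_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning speech and context embeddings.

    speech_emb, context_emb: (batch, dim) pooled representations of each
    utterance and its context prompt. Matching pairs sit on the diagonal
    of the batch-by-batch similarity matrix; all other pairs are negatives.
    (Hypothetical sketch; the paper's exact objective may differ.)
    """
    s = l2_normalize(speech_emb)
    c = l2_normalize(context_emb)
    logits = s @ c.T / temperature            # (batch, batch) scaled cosine sims
    n = logits.shape[0]
    idx = np.arange(n)

    def xent(lg):
        # numerically stable log-softmax over each row,
        # then negative log-probability of the diagonal (matching) pair
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the speech->context and context->speech directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

In training, minimizing this loss pulls each utterance's speech embedding toward the embedding of its own context (dialogue history or biasing words) and away from the contexts of other utterances in the batch, which is what gives the shared embedding space its structure.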


Source: arXiv:2603.06505v1 - http://arxiv.org/abs/2603.06505v1
PDF: https://arxiv.org/pdf/2603.06505v1
Original Link: http://arxiv.org/abs/2603.06505v1


