Research Paper · Researchia: 202603.03055

Chunk-wise Attention Transducers for Fast and Accurate Streaming Speech-to-Text

Hainan Xu


Submitted: March 3, 2026 · Subjects: Machine Learning; Data Science

Description / Details

We propose Chunk-wise Attention Transducer (CHAT), a novel extension to RNN-T models that processes audio in fixed-size chunks while employing cross-attention within each chunk. This hybrid approach maintains RNN-T's streaming capability while introducing controlled flexibility for local alignment modeling. CHAT significantly reduces the temporal dimension that RNN-T must handle, yielding substantial efficiency improvements: up to 46.2% reduction in peak training memory, up to 1.36X faster training, and up to 1.69X faster inference. Alongside these efficiency gains, CHAT achieves consistent accuracy improvements over RNN-T across multiple languages and tasks -- up to 6.3% relative WER reduction for speech recognition and up to 18.0% BLEU improvement for speech translation. The method proves particularly effective for speech translation, where RNN-T's strict monotonic alignment hurts performance. Our results demonstrate that the CHAT model offers a practical solution for deploying more capable streaming speech models without sacrificing real-time constraints.
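The core idea described above — collapsing each fixed-size chunk of encoder frames into a single representation via attention confined to that chunk — can be sketched as follows. This is a minimal illustrative reading of the abstract, not the authors' implementation; the single-query-per-chunk design, the mean-pooled query, and all projection matrices (`Wq`, `Wk`, `Wv`) are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def chunkwise_attention(frames, Wq, Wk, Wv, chunk_size):
    """Collapse each chunk of encoder frames to one vector via attention.

    frames: (T, d) encoder outputs; T is assumed divisible by chunk_size.
    One query per chunk (mean of the chunk, projected) attends only over
    the frames inside that chunk, so no future chunks are needed and the
    operation remains streamable. Output has shape (T // chunk_size, d),
    i.e. the temporal dimension the transducer must handle shrinks by a
    factor of chunk_size.
    """
    T, d = frames.shape
    n = T // chunk_size
    chunks = frames.reshape(n, chunk_size, d)           # (n, C, d)
    q = chunks.mean(axis=1) @ Wq                        # (n, d): one query per chunk
    k = chunks @ Wk                                     # (n, C, d)
    v = chunks @ Wv                                     # (n, C, d)
    scores = np.einsum('nd,ncd->nc', q, k) / np.sqrt(d) # attention within each chunk
    w = softmax(scores, axis=-1)                        # (n, C)
    return np.einsum('nc,ncd->nd', w, v)                # (n, d)

rng = np.random.default_rng(0)
d, T, C = 8, 16, 4
x = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = chunkwise_attention(x, Wq, Wk, Wv, chunk_size=C)
print(out.shape)  # temporal dimension reduced from T=16 to T//C=4
```

Because attention never crosses a chunk boundary, latency is bounded by the chunk duration, which is consistent with the streaming constraint the abstract emphasizes.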


Source: arXiv:2602.24245v1 (http://arxiv.org/abs/2602.24245v1)
PDF: https://arxiv.org/pdf/2602.24245v1

