
Learning Perceptual Representations for Gaming NR-VQA with Multi-Task FR Signals

Yu-Chih Chen

Abstract

No-reference video quality assessment (NR-VQA) for gaming videos is challenging due to limited human-rated datasets and unique content characteristics, including fast motion, stylized graphics, and compression artifacts. We present MTL-VQA, a multi-task learning framework that uses full-reference (FR) metrics as supervisory signals to learn perceptually meaningful features without requiring human labels for pretraining. By jointly optimizing multiple FR objectives with adaptive task weighting, our approach learns shared representations that transfer effectively to NR-VQA. Experiments on gaming video datasets show MTL-VQA achieves performance competitive with state-of-the-art NR-VQA methods across both MOS-supervised and label-efficient/self-supervised settings.

Submitted: February 13, 2026
Subjects: Engineering; Biomedical Engineering
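The abstract does not specify how the "adaptive task weighting" over the FR objectives is implemented. One common scheme for this kind of multi-task balancing is homoscedastic-uncertainty weighting (Kendall et al.), sketched below as a minimal, hypothetical illustration in plain Python; the function and variable names (`weighted_multitask_loss`, `log_vars`) are assumptions for exposition, not the paper's API.

```python
import math

def weighted_multitask_loss(fr_losses, log_vars):
    """Combine per-task FR regression losses with learned uncertainty
    weights: total = sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2).
    In training, the s_i would be learnable parameters updated jointly
    with the network; here they are plain floats for illustration."""
    assert len(fr_losses) == len(log_vars)
    total = 0.0
    for loss, s in zip(fr_losses, log_vars):
        # exp(-s) down-weights noisy (high-variance) tasks; +s regularizes
        # against driving all weights to zero.
        total += math.exp(-s) * loss + s
    return total

# Toy example: three FR-metric regression losses (e.g. VMAF-, SSIM-,
# PSNR-style targets, purely illustrative).
losses = [0.9, 0.4, 0.2]
log_vars = [0.0, 0.0, 0.0]   # equal weights when all s_i = 0
print(weighted_multitask_loss(losses, log_vars))  # 1.5
```

With all `log_vars` at zero the combination reduces to a plain sum; during training, tasks whose losses stay large and noisy would learn larger `s_i` and be down-weighted automatically.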


Source: arXiv:2602.11903v1 (http://arxiv.org/abs/2602.11903v1)
PDF: https://arxiv.org/pdf/2602.11903v1

