Research Paper · Researchia:202603.11053 · Bio-AI Interfaces > Neuroscience

Aligning What EEG Can See: Structural Representations for Brain-Vision Matching

Jingyi Tang

Abstract

Visual decoding from electroencephalography (EEG) has emerged as a highly promising avenue for non-invasive brain-computer interfaces (BCIs). Existing EEG-based decoding methods predominantly align brain signals with the final-layer semantic embeddings of deep visual models. However, relying on these highly abstracted embeddings inevitably leads to severe cross-modal information mismatch. In this work, we introduce the concept of Neural Visibility and accordingly propose the EEG-Visible Layer Selection Strategy, which aligns EEG signals with intermediate visual layers to minimize this mismatch. Furthermore, to accommodate the multi-stage nature of human visual processing, we propose a novel Hierarchically Complementary Fusion (HCF) framework that jointly integrates visual representations from different hierarchical levels. Extensive experiments demonstrate that our method achieves state-of-the-art performance, reaching 84.6% accuracy (+21.4%) on zero-shot visual decoding on the THINGS-EEG dataset. Moreover, our method achieves up to a 129.8% performance gain across diverse EEG baselines, demonstrating its robust generalizability.
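The abstract does not spell out the implementation, but the two ideas it names (scoring how "EEG-visible" each visual layer is, then fusing representations across hierarchy levels before retrieval) can be illustrated with a minimal, hypothetical sketch. Everything below is an assumption for illustration: the `neural_visibility` proxy (mean matched-pair cosine similarity), the softmax weighting, and the random stand-in features are not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x, axis=-1):
    """Normalize rows to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Hypothetical setup: 8 stimuli, 64-d EEG embeddings, and visual features
# from three hierarchy levels ("low", "mid", "high"), all projected to 64-d.
n, d = 8, 64
eeg = l2norm(rng.normal(size=(n, d)))
layers = {name: l2norm(rng.normal(size=(n, d))) for name in ("low", "mid", "high")}

def neural_visibility(eeg_emb, feats):
    # Illustrative proxy for a layer's "neural visibility": mean cosine
    # similarity between matched EEG/visual pairs (higher = more EEG-visible).
    return float(np.mean(np.sum(eeg_emb * feats, axis=1)))

# Weight each hierarchy level by its visibility score (softmax), then fuse
# the levels into a single complementary visual representation.
scores = np.array([neural_visibility(eeg, f) for f in layers.values()])
weights = np.exp(scores) / np.exp(scores).sum()
fused = l2norm(sum(w * f for w, f in zip(weights, layers.values())))

# Zero-shot decoding as retrieval: match each EEG embedding to the fused
# visual representations by cosine similarity.
similarity = eeg @ fused.T
predictions = similarity.argmax(axis=1)
```

In practice the paper's pipeline would operate on learned EEG encoders and real intermediate activations of a pretrained vision model; this sketch only shows the shape of the selection-then-fusion logic.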


Source: arXiv:2603.07077v1 (http://arxiv.org/abs/2603.07077v1)
PDF: https://arxiv.org/pdf/2603.07077v1

Submission: 3/11/2026
Subjects: Neuroscience; Bio-AI Interfaces

