Research Paper · Researchia:202512.24f74365 · Neuroscience

Decoding Predictive Inference in Visual Language Processing via Spatiotemporal Neural Coherence

Sean C. Borneman

Abstract

Human language processing relies on the brain's capacity for predictive inference. We present a machine learning framework for decoding neural (EEG) responses to dynamic visual language stimuli in Deaf signers. Using coherence between neural signals and optical flow-derived motion features, we construct spatiotemporal representations of predictive neural dynamics. Through entropy-based feature selection, we identify frequency-specific neural signatures that differentiate interpretable linguistic input from linguistically disrupted (time-reversed) stimuli. Our results reveal distributed left-hemispheric and frontal low-frequency coherence as key features in language comprehension, with experience-dependent neural signatures correlating with age. This work demonstrates a novel multimodal approach for probing experience-driven generative models of perception in the brain.
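The abstract's core measure, coherence between EEG signals and optical-flow-derived motion features, can be illustrated with a minimal sketch. The following is a hypothetical example, not the paper's actual pipeline: it simulates one EEG channel that partially tracks a low-frequency motion time series, computes magnitude-squared coherence with `scipy.signal.coherence`, and averages it over an assumed 4-8 Hz band of interest. The coupling strength, sampling rate, and band edges are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical illustration (assumed parameters, not the paper's pipeline):
# magnitude-squared coherence between one EEG channel and an
# optical-flow motion time series, both sampled at fs Hz.
rng = np.random.default_rng(0)
fs = 256  # assumed EEG sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)  # 30 s of signal

# Simulated motion feature: a 5 Hz oscillation plus noise
motion = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)
# Simulated EEG channel that partially tracks the motion signal
eeg = 0.6 * motion + rng.standard_normal(t.size)

# Welch-based magnitude-squared coherence; values lie in [0, 1]
f, Cxy = coherence(eeg, motion, fs=fs, nperseg=512)

# Average coherence within an assumed low-frequency band (4-8 Hz)
band = (f >= 4) & (f <= 8)
low_freq_coh = Cxy[band].mean()
print(f"Mean 4-8 Hz EEG-motion coherence: {low_freq_coh:.2f}")
```

In a full pipeline along the abstract's lines, one would compute such band-averaged coherence values per channel and frequency band to form spatiotemporal feature maps, then apply entropy-based selection (e.g., mutual information with the condition labels) to rank them.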

Submission: 12/24/2025
Subjects: Neuroscience

