Research Paper | Researchia:202604.01057 [Artificial Intelligence > AI]

Quantifying Cross-Modal Interactions in Multimodal Glioma Survival Prediction via InterSHAP: Evidence for Additive Signal Integration

Iain Swift

Abstract

Multimodal deep learning for cancer prognosis is commonly assumed to benefit from synergistic cross-modal interactions, yet this assumption has not been directly tested in survival prediction settings. This work adapts InterSHAP, a metric based on the Shapley interaction index, from classification to Cox proportional hazards models and applies it to quantify cross-modal interactions in glioma survival prediction. Using TCGA-GBM and TCGA-LGG data (n = 575), we evaluate four fusion architectures combining whole-slide image (WSI) and RNA-seq features. Our central finding is an inverse relationship between predictive performance and measured interaction: architectures achieving superior discrimination (C-index 0.64 β†’ 0.82) exhibit equivalent or lower cross-modal interaction (4.8% β†’ 3.0%). Variance decomposition reveals stable additive contributions across all architectures (WSI β‰ˆ 40%, RNA β‰ˆ 55%, interaction β‰ˆ 4%), indicating that performance gains arise from complementary signal aggregation rather than learned synergy. These findings provide a practical model-auditing tool for comparing fusion strategies, reframe the role of architectural complexity in multimodal fusion, and have implications for privacy-preserving federated deployment.
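To make the quantity concrete: with exactly two modalities, the Shapley interaction index has a closed form built from four model evaluations (both modalities masked, each alone, and both together). The sketch below illustrates this decomposition on a hypothetical two-modality risk model; the function names, the toy linear-plus-cross-term model, and the zero-baseline masking are illustrative assumptions, not the paper's actual InterSHAP implementation or its Cox adaptation.

```python
import numpy as np

def shapley_two_modality(f, w, r, w0, r0):
    """Exact two-player Shapley decomposition of a scalar model output.

    f      : model mapping (wsi_features, rna_features) -> scalar risk score
    w, r   : WSI and RNA feature vectors for one patient
    w0, r0 : baseline ("masked") values standing in for an absent modality
    """
    f00 = f(w0, r0)  # neither modality present
    f10 = f(w, r0)   # WSI only
    f01 = f(w0, r)   # RNA only
    f11 = f(w, r)    # full multimodal input

    # Shapley values for the two "players" (modalities)
    phi_wsi = 0.5 * ((f10 - f00) + (f11 - f01))
    phi_rna = 0.5 * ((f01 - f00) + (f11 - f10))

    # Pairwise Shapley interaction index: output not explained additively
    interaction = f11 - f10 - f01 + f00
    return phi_wsi, phi_rna, interaction

# Hypothetical risk model: mostly additive, with a small cross-term.
def risk(w, r):
    return 0.4 * w.sum() + 0.55 * r.sum() + 0.05 * w.sum() * r.sum()

w, r = np.array([1.0]), np.array([1.0])
phi_w, phi_r, inter = shapley_two_modality(
    risk, w, r, np.zeros(1), np.zeros(1)
)

# InterSHAP-style summary: share of attributed magnitude due to interaction
share = abs(inter) / (abs(phi_w) + abs(phi_r) + abs(inter))
```

Under the paper's additive-integration finding, this interaction share stays small (a few percent) even as the main-effect contributions of each modality carry the discrimination; an InterSHAP-style audit aggregates such per-sample shares across a cohort.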


Source: arXiv:2603.29977v1 (http://arxiv.org/abs/2603.29977v1)
PDF: https://arxiv.org/pdf/2603.29977v1

Submission: 4/1/2026
Subjects: Artificial Intelligence (AI)
