
EventHub: Data Factory for Generalizable Event-Based Stereo Networks without Active Sensors

Luca Bartolomei

Abstract

We propose EventHub, a novel framework for training deep event stereo networks without ground-truth annotations from costly active sensors, relying instead on standard color images. From these images, we derive either proxy annotations and proxy events through state-of-the-art novel view synthesis techniques, or simply proxy annotations when images are already paired with event data. Using the training set generated by our data factory, we repurpose state-of-the-art stereo models from the RGB literature to process event data, obtaining new event stereo models with unprecedented generalization capabilities. Experiments on widely used event stereo datasets support the effectiveness of EventHub and show how the same data distillation mechanism can improve the accuracy of RGB stereo foundation models in challenging conditions such as nighttime scenes.


Source: arXiv:2604.02331v1 - http://arxiv.org/abs/2604.02331v1
PDF: https://arxiv.org/pdf/2604.02331v1

Submission: 4/5/2026
Subjects: Computer Vision

