Poster
SpikeDiff: Zero-shot High-Quality Video Reconstruction from Sub-millisecond Chromatic Spike Streams
Siqi Yang · Jinxiu Liang · Zhaojun Huang · Yeliduosi Xiaokaiti · Yakun Chang · Zhaofei Yu · Boxin Shi
High-speed video reconstruction from neuromorphic spike cameras offers a promising alternative to traditional frame-based imaging, providing superior temporal resolution and dynamic range with reduced power consumption. Nevertheless, reconstructing high-quality colored videos from spikes captured within ultra-short time intervals remains challenging due to the noisy nature of spikes. While some existing methods extend the temporal capture window to improve reconstruction quality, they compromise the temporal resolution advantages of spike cameras. In this paper, we introduce SpikeDiff, the first zero-shot framework that leverages pretrained diffusion models to reconstruct high-quality colored videos from sub-millisecond chromatic spikes. By incorporating physics-based guidance into the diffusion sampling process, SpikeDiff bridges the domain gap between chromatic spikes and conventional images, enabling high-fidelity reconstruction without requiring domain-specific training data. Extensive experiments demonstrate that SpikeDiff achieves impressive reconstruction quality while maintaining ultra-high temporal resolution, outperforming existing methods across diverse challenging scenarios.
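To make the idea of "physics-based guidance in the diffusion sampling process" concrete, below is a minimal, illustrative sketch of guided diffusion sampling for spike-to-image reconstruction. It is not the authors' implementation: the integrate-and-fire spike forward model, the toy denoiser, the DDPM schedule, and all hyperparameters (`n_steps`, `threshold`, `guidance_scale`) are assumptions introduced here for illustration. The guidance step follows the generic recipe of nudging each denoising step toward consistency between simulated and observed spikes.

```python
# Minimal sketch (assumed, not the authors' code) of physics-guided diffusion
# sampling for reconstructing an image from a binary spike stream.
import torch


def spike_forward_model(img, n_steps=20, threshold=1.0):
    """Toy integrate-and-fire spike camera model (assumed, simplified).

    Accumulates pixel intensity over `n_steps` virtual time steps and emits a
    spike when the accumulator crosses `threshold`; soft-thresholded here so
    the model stays differentiable for guidance.
    """
    acc = torch.zeros_like(img)
    spikes = []
    for _ in range(n_steps):
        acc = acc + img
        fire = torch.sigmoid(10.0 * (acc - threshold))  # soft spike
        spikes.append(fire)
        acc = acc - fire * threshold                    # soft reset
    return torch.stack(spikes, dim=0)                   # (T, ...) spike stream


class ToyDenoiser(torch.nn.Module):
    """Stand-in for a pretrained diffusion denoiser (predicts noise)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, x, t):
        return self.net(x)


def ddpm_coeffs(T=100):
    betas = torch.linspace(1e-4, 2e-2, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    return betas, alphas, alpha_bars


def guided_sampling(denoiser, observed_spikes, shape, T=100, guidance_scale=1.0):
    """DDPM-style sampling with a data-consistency nudge at every step."""
    betas, alphas, alpha_bars = ddpm_coeffs(T)
    x = torch.randn(shape)
    for t in reversed(range(T)):
        x = x.detach().requires_grad_(True)
        eps = denoiser(x, t)
        # Estimate the clean image x0 from the current noisy sample.
        x0_hat = (x - torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alpha_bars[t])
        # Physics-based guidance: simulated spikes should match the observation.
        loss = torch.nn.functional.mse_loss(
            spike_forward_model(x0_hat.clamp(0, 1)), observed_spikes)
        grad = torch.autograd.grad(loss, x)[0]
        with torch.no_grad():
            mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
            noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
            x = mean + torch.sqrt(betas[t]) * noise - guidance_scale * grad
    return x.detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    gt = torch.rand(1, 1, 32, 32)                # toy ground-truth frame
    spikes = spike_forward_model(gt)             # simulated spike observation
    recon = guided_sampling(ToyDenoiser(), spikes, gt.shape, T=50)
    print("reconstruction shape:", tuple(recon.shape))
```

In this sketch the pretrained denoiser supplies the image prior, while the differentiable spike forward model ties each denoising step back to the observed spike stream, which is the general mechanism a zero-shot, guidance-based approach relies on to avoid domain-specific training data.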