

Poster

BVINet: Unlocking Blind Video Inpainting with Zero Annotations

Zhiliang Wu · Kerui Chen · Kun Li · Hehe Fan · Yi Yang


Abstract:

Video inpainting aims to fill in corrupted regions of a video with plausible content. Existing methods generally assume that the locations of the corrupted regions are known, focusing primarily on “how to inpaint”. This assumption necessitates manual annotation of the corrupted regions with binary masks to indicate “where to inpaint”. However, annotating these masks is labor-intensive and expensive, limiting the practicality of current methods. In this paper, we relax this assumption by defining a new blind video inpainting setting, enabling a network to learn the mapping from a corrupted video to the inpainted result directly, eliminating the need for corrupted-region annotations. Specifically, we propose an end-to-end blind video inpainting network (BVINet) to address both “where to inpaint” and “how to inpaint” simultaneously. On the one hand, BVINet predicts the masks of corrupted regions by detecting semantically discontinuous regions of each frame and utilizing the temporal consistency prior of the video. On the other hand, the predicted masks are incorporated into BVINet, allowing it to capture valid context information from uncorrupted regions to fill in corrupted ones. In addition, we introduce a consistency loss to regularize the training of BVINet. In this way, mask prediction and video completion mutually constrain each other, maximizing the overall performance of the trained model. Recognizing that existing datasets are unsuitable for the blind video inpainting task due to the prior knowledge they contain (e.g., corrupted contents and clear borders), we contribute a new dataset specifically designed for blind video inpainting. Extensive experimental results demonstrate the effectiveness and superiority of our method.
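The abstract describes a two-branch design (mask prediction plus completion) coupled by a consistency loss. The following is a minimal illustrative sketch, not the authors' code: it assumes a simple per-frame mask-prediction branch, a completion branch conditioned on the predicted mask, and a consistency term that discourages changes outside the predicted corrupted regions. All module shapes, layer choices, loss forms, and the weight `lam` are assumptions for illustration only.

```python
# Hedged sketch of a blind-inpainting training step (PyTorch); not BVINet itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskPredictor(nn.Module):
    """Toy branch that predicts a soft corruption mask ("where to inpaint")."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, frames):                    # frames: (B, C, H, W)
        return torch.sigmoid(self.net(frames))    # soft mask in [0, 1]

class Completer(nn.Module):
    """Toy branch that fills corrupted regions given frame + mask ("how to inpaint")."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, frames, masks):
        return self.net(torch.cat([frames, masks], dim=1))

def training_step(mask_net, comp_net, corrupted, clean, lam=0.1):
    """Joint step: reconstruction loss on the composed output plus a
    consistency term so the completer leaves predicted-valid regions alone."""
    masks = mask_net(corrupted)
    output = comp_net(corrupted, masks)
    # Composite: keep uncorrupted pixels, use the completion inside the mask.
    composed = masks * output + (1.0 - masks) * corrupted
    rec_loss = F.l1_loss(composed, clean)
    # Consistency: outside the predicted mask, output should match the input.
    consistency = ((1.0 - masks) * (output - corrupted).abs()).mean()
    return rec_loss + lam * consistency

if __name__ == "__main__":
    mask_net, comp_net = MaskPredictor(), Completer()
    corrupted = torch.rand(2, 3, 64, 64)   # stand-in for video frames
    clean = torch.rand(2, 3, 64, 64)
    loss = training_step(mask_net, comp_net, corrupted, clean)
    loss.backward()
    print(float(loss))
```

In this toy setup the two branches constrain each other only through the joint losses; the actual BVINet additionally exploits temporal consistency across frames, which a per-frame sketch like this does not capture.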
