
Poster

Training-free Geometric Image Editing on Diffusion Models

Hanshen Zhu · Zhen Zhu · Kaile Zhang · Yiming Gong · Yuliang Liu · Xiang Bai


Abstract:

We tackle the problem of geometric image editing, where an object within an image is repositioned, reoriented, or reshaped while preserving overall scene coherence. Previous diffusion-based editing methods often attempt to handle all relevant subtasks in a single step, which proves difficult when transformations become large or structurally complex. We address this by proposing a decoupled pipeline that separates object transformation, source region inpainting, and target region refinement. Both inpainting and refinement are implemented using a training-free diffusion approach, FreeFine. In experiments on our new GeoBench benchmark, which contains both 2D and 3D editing scenarios, FreeFine outperforms state-of-the-art alternatives in image fidelity and edit precision, especially under demanding transformations. We will release our code and benchmark when the paper becomes publicly available.
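The decoupled pipeline in the abstract can be sketched as three independent stages. The function names, the integer-shift transformation, and the placeholder fill are illustrative assumptions for exposition only, not the authors' FreeFine implementation; in the actual method, the inpainting and refinement stages are training-free diffusion processes.

```python
# Hypothetical sketch of the three-stage decoupled pipeline: (1) transform
# the object, (2) inpaint the vacated source region, (3) refine/composite
# the target region. All names and representations here are assumptions.
import numpy as np

def transform_object(image, mask, shift):
    """Stage 1: reposition the masked object by an integer (dy, dx) shift.
    Real geometric edits could also reorient or reshape the object."""
    dy, dx = shift
    moved_img = np.roll(image, (dy, dx), axis=(0, 1))
    moved_mask = np.roll(mask, (dy, dx), axis=(0, 1))
    return moved_img, moved_mask

def inpaint_source(image, source_mask, fill_value=0.5):
    """Stage 2: fill the vacated source region. A constant fill stands in
    for a diffusion inpainter that would synthesize plausible background."""
    out = image.copy()
    out[source_mask] = fill_value
    return out

def refine_target(background, moved_img, target_mask):
    """Stage 3: composite the transformed object into the target region.
    A direct paste stands in for diffusion-based boundary refinement."""
    out = background.copy()
    out[target_mask] = moved_img[target_mask]
    return out

def geometric_edit(image, mask, shift):
    """Run the three decoupled stages in sequence."""
    moved_img, moved_mask = transform_object(image, mask, shift)
    background = inpaint_source(image, mask)
    return refine_target(background, moved_img, moved_mask)
```

The point of the decoupling is visible even in this toy version: each stage operates on its own region (source mask vs. target mask), so a large transformation does not force a single model to solve removal, synthesis, and blending jointly.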
