✏️ Edit One for All: Interactive Batch Image Editing
CVPR 2024
Thao Nguyen
Utkarsh Ojha
Yuheng Li
Haotian Liu
Yong Jae Lee
University of Wisconsin-Madison 🦡
[arXiv 📝]
[code ⚙️]
[poster 🖼️]

Given an edit specified by users in an example image (e.g., a dog's pose),
our method can automatically transfer that edit to other test images (e.g., putting all dogs in the same pose).


Abstract

In recent years, image editing has advanced remarkably. With increased human control, it is now possible to edit an image in a plethora of ways: from specifying in text what we want to change, to directly dragging the contents of the image in an interactive point-based manner. However, most of the focus has remained on editing single images at a time. Whether and how we can simultaneously edit large batches of images has remained understudied. With the goal of minimizing human supervision in the editing process, this paper presents a novel method for interactive batch image editing using StyleGAN as the medium. Given an edit specified by users in an example image (e.g., make the face frontal), our method can automatically transfer that edit to other test images, so that regardless of their initial state (pose), they all arrive at the same final state (e.g., all facing front). Extensive experiments demonstrate that edits performed using our method have similar visual quality to existing single-image-editing methods, while having more visual consistency and saving significant time and human effort.
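At its core, the edit on the example image can be viewed as an offset in StyleGAN's latent space. The snippet below is a minimal sketch of the naive version of this idea, assuming images have already been inverted into latent codes; the names (transfer_edit_naive, w_example, w_tests) are illustrative, not the paper's API. As the Framework section explains, this naive transfer is exactly what our optimization improves upon.

    import torch

    def transfer_edit_naive(w_example, w_example_edited, w_tests):
        """Reuse the example image's latent-space edit on a batch of test latents."""
        delta = w_example_edited - w_example   # editing direction taken from the example
        return [w + delta for w in w_tests]    # same offset applied to every test image

    # Illustrative usage with random latents standing in for real GAN inversions:
    w_ex = torch.randn(1, 512)
    w_ex_edited = w_ex + 0.5 * torch.randn(1, 512)   # stand-in for the user's edit
    edited = transfer_edit_naive(w_ex, w_ex_edited,
                                 [torch.randn(1, 512) for _ in range(3)])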

Single Image Editing vs. Batch Image Editing. (a) Prior work focuses on editing a single image.
(b) We focus on batch image editing, where the user's edit on a single image is automatically transferred to new images, so that they all arrive at the same final state regardless of their initial state.

Interactive Batch Image Editing

As users adjust the editing strength in the example image (top row), all test images are automatically updated. (Red bounding boxes indicate the edits specified by the drag points.) Please refer to the main paper for higher resolution.
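One hedged sketch of how this interactivity could be wired up, assuming each test image already has a fitted per-image strength (see the Framework section below); rescale_strengths and its arguments are illustrative names, not our implementation: when the slider on the example image moves, every test image's strength is rescaled by the same relative factor so the whole batch stays in sync.

    def rescale_strengths(alpha_example_new, alpha_example_old, test_alphas):
        """Keep test images in sync when the example's edit strength changes."""
        factor = alpha_example_new / alpha_example_old
        return [alpha * factor for alpha in test_alphas]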


Framework

(a) Setting. (b) Naive Approach: an editing direction that works for the example image may not generalize to the test images. (c) Optimizing the Editing Direction: we optimize a globally consistent direction that is effective for both the example and the test images. (d) Adjusting the Editing Strength: reaching consistent final states requires adjusting the editing strength for each test image.
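Below is a hedged PyTorch sketch of steps (c) and (d), not the paper's exact objective: state_fn is an assumed differentiable stand-in for whatever measures the edited attribute (e.g., a pose regressor), one direction d is shared across all images, and each test image gets its own strength so that every edited latent lands on the example's final state.

    import torch

    def fit_direction_and_strengths(w_example_edited, w_tests, state_fn,
                                    steps=200, lr=0.01):
        """Jointly fit a shared editing direction and per-image strengths."""
        target = state_fn(w_example_edited).detach()           # desired final state
        d = torch.zeros_like(w_tests[0], requires_grad=True)   # globally consistent direction
        alphas = torch.ones(len(w_tests), requires_grad=True)  # one strength per test image
        opt = torch.optim.Adam([d, alphas], lr=lr)
        for _ in range(steps):
            # Every edited latent should reach the same final state as the example.
            loss = sum((state_fn(w + a * d) - target).pow(2).sum()
                       for w, a in zip(w_tests, alphas))
            opt.zero_grad()
            loss.backward()
            opt.step()
        return d.detach(), alphas.detach()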

Multiple Edits

Multiple edits can be applied to the example image before being transferred to the test images.
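Under the same illustrative assumptions as the sketches above, composing edits could be as simple as accumulating one (direction, strength) offset per edit; this is a simplification, not the paper's exact procedure.

    def apply_edits(w, edits):
        """Apply a sequence of edits, each a (direction, strength) pair."""
        for direction, alpha in edits:
            w = w + alpha * direction
        return w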

Limitations

(a) Failure Case: Our method may struggle to capture fine details (e.g., the curling trunk of an elephant). (b) Example-Test Similarity: For optimal results, the example and test images should belong to the same semantic domain (e.g., both featuring long hair) so that edits transfer correctly. (c-d) Interesting Cases: Edits can be misinterpreted, leading to unexpected outcomes such as winking the wrong eye (c) or unintentionally flipping the horse (d).

BibTeX:
@inproceedings{nguyen2024edit,
    title={Edit One for All: Interactive Batch Image Editing},
    author={Thao Nguyen and Utkarsh Ojha and Yuheng Li and Haotian Liu and Yong Jae Lee},
    year={2024},
    eprint={2401.10219},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Acknowledgements

The website template was borrowed from Colorful Image Colorization and Nerfies. Thank you (.❛ ᴗ ❛.).