TIP: Text-Driven Image Processing with Semantic and Restoration Instructions



1 Google Research     2 HKUST    

TL;DR: Restore your degraded image by prompting with target semantic and restoration instructions.

Abstract

Text-driven diffusion models have become increasingly popular for various image editing tasks, including inpainting, stylization, and object replacement. However, adapting this language-vision paradigm to finer-grained image processing tasks, such as denoising, super-resolution, deblurring, and compression artifact removal, remains an open research problem.

In this paper, we develop TIP, a Text-driven Image Processing framework that leverages natural language as a user-friendly interface to control the image restoration process. We consider the capacity of text information along two dimensions. First, we use content-related prompts to enhance semantic alignment, effectively alleviating identity ambiguity in the restoration outcomes. Second, our approach is the first framework to support fine-level instruction through language-based, quantitative specification of the restoration strength, without the need for explicit task-specific design.

In addition, we introduce a novel fusion mechanism that augments the existing ControlNet architecture by learning to rescale the generative prior, thereby achieving better restoration fidelity. Our extensive experiments demonstrate the superior restoration performance of TIP compared to the state of the art, alongside offering the flexibility of text-based control over the restoration effects.

Pipeline

In the training phase, we begin by synthesizing a degraded version $y$ of a clean image $x$. Our degradation synthesis pipeline also creates a restoration prompt ${c}_r$, which contains numeric parameters reflecting the intensity of the introduced degradation. We then inject the synthetic restoration prompt into a ControlNet adaptor, which connects to the frozen backbone, driven by the semantic prompt ${c}_s$, through our proposed modulation fusion blocks ($\gamma$, $\beta$).
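As a concrete illustration, below is a minimal PyTorch sketch of both training ingredients. The degradation recipe, prompt template, and all module and function names are our own assumptions for illustration, not the paper's exact implementation; the fusion block follows a common scale-and-shift (SFT-style) formulation consistent with the ($\gamma$, $\beta$) notation above.

import random

import torch
import torch.nn as nn
import torchvision.transforms.functional as TF


def synthesize_training_pair(x: torch.Tensor):
    """Degrade a clean image x (C, H, W in [0, 1]) and build a matching
    restoration prompt. The degradation types and the prompt template
    here are illustrative stand-ins for the paper's pipeline."""
    sigma_blur = random.uniform(0.1, 4.0)
    sigma_noise = random.uniform(0.0, 0.2)
    y = TF.gaussian_blur(x, kernel_size=9, sigma=sigma_blur)
    y = (y + sigma_noise * torch.randn_like(y)).clamp(0.0, 1.0)
    c_r = f"Deblur with sigma {sigma_blur:.2f}; denoise with sigma {sigma_noise:.2f}"
    return y, c_r


class ModulationFusionBlock(nn.Module):
    """(gamma, beta) fusion: the ControlNet adaptor feature predicts a
    scale and a shift that rescale the frozen backbone feature, rather
    than being added to it directly."""

    def __init__(self, channels: int):
        super().__init__()
        # Zero-initialized 1x1 convs, so training starts from the
        # unmodified generative prior (gamma = beta = 0 -> identity).
        self.to_gamma = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_beta = nn.Conv2d(channels, channels, kernel_size=1)
        for conv in (self.to_gamma, self.to_beta):
            nn.init.zeros_(conv.weight)
            nn.init.zeros_(conv.bias)

    def forward(self, backbone_feat: torch.Tensor, adaptor_feat: torch.Tensor):
        gamma = self.to_gamma(adaptor_feat)
        beta = self.to_beta(adaptor_feat)
        # Rescale the generative prior; identity when gamma = beta = 0.
        return backbone_feat * (1.0 + gamma) + beta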

During test time, users can employ the TIP framework either as a blind restoration model, with the restoration prompt $\textit{``Remove all degradation''}$ and an empty semantic prompt $\varnothing$, or with manually adjusted restoration and semantic prompts ${c}_r$ and ${c}_s$ to obtain the desired effect.
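The two usage modes can be summarized with a short sketch. Note that `TIPPipeline`, its arguments, and the example prompts (other than "Remove all degradation", which appears above) are hypothetical: this shows how such an interface might look, not a released API.

# Hypothetical interface: `TIPPipeline`, the checkpoint name, and the
# call signature are illustrative only.
pipe = TIPPipeline.from_pretrained("tip-checkpoint")

# Blind restoration: generic restoration prompt, empty semantic prompt.
restored = pipe(image=degraded,
                restoration_prompt="Remove all degradation",
                semantic_prompt="")

# Fine-level control: numeric strengths in the restoration prompt, plus a
# semantic prompt to resolve identity ambiguity in the restored content.
restored = pipe(image=degraded,
                restoration_prompt="Deblur with sigma 3.00; denoise with sigma 0.05",
                semantic_prompt="a close-up photo of a corgi on a beach")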

Restoration Prompting




Prompt-space walking visualization for the restoration prompt. Our method decouples the restoration direction and strength through natural language prompting alone.
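One way to read this visualization: holding the restoration direction fixed and sweeping only the numeric strength in the prompt traces a path through the prompt space. A sketch, reusing the hypothetical `pipe` interface from the previous example:

# Walk the prompt space along a single restoration direction (denoising)
# by sweeping only the numeric strength token in the prompt.
for strength in (0.0, 0.05, 0.1, 0.2):
    out = pipe(image=degraded,
               restoration_prompt=f"Denoise with sigma {strength:.2f}",
               semantic_prompt="")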

Semantic Prompting

Test-time semantic prompting. Our framework restores degraded images guided by flexible semantic prompts, while unrelated background elements and global tones remain aligned with the degraded input conditioning.

Baseline Comparison

BibTeX

@article{qi2023tip,
  title={TIP: Text-Driven Image Processing with Semantic and Restoration Instructions},
  author={Chenyang Qi and Zhengzhong Tu and Keren Ye and Mauricio Delbracio and Peyman Milanfar and Qifeng Chen and Hossein Talebi},
  journal={arXiv preprint arXiv:2312.11595},
  year={2023}
}

Acknowledgement

We are grateful to Kelvin Chan and David Salesin for their valuable feedback. We also extend our gratitude to Shlomi Fruchter, Kevin Murphy, Mohammad Babaeizadeh, and Han Zhang for their instrumental contributions in facilitating the initial implementation of the latent diffusion model.