Text-driven diffusion models have become increasingly popular for image editing tasks such as inpainting, stylization, and object replacement. However, adapting this language-vision paradigm to finer-grained image processing tasks, such as denoising, super-resolution, deblurring, and compression artifact removal, remains an open research problem. In this paper, we develop SPIRE, a Semantic and restoration Prompt-driven Image Restoration framework that leverages natural language as a user-friendly interface to control the image restoration process. We consider the capacity of prompt information along two dimensions. First, we use content-related prompts to enhance semantic alignment, effectively alleviating identity ambiguity in the restoration outcomes. Second, our approach is the first framework to support fine-grained instruction through language-based quantitative specification of the restoration strength, without the need for explicit task-specific design. In addition, we introduce a novel fusion mechanism that augments the existing ControlNet architecture by learning to rescale the generative prior, thereby achieving better restoration fidelity. Our extensive experiments demonstrate the superior restoration performance of SPIRE compared to the state of the art, alongside the flexibility of text-based control over the restoration effects.
In the training phase, we begin by synthesizing a degraded version $y$ of a clean image $x$. Our degradation synthesis pipeline also creates a restoration prompt ${c}_r$, which contains numeric parameters that reflect the intensity of the degradation introduced.
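The pairing of a synthetic degradation with a numeric restoration prompt can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the degradation operators sampled here (Gaussian noise plus downsampling) and the prompt template are assumptions for the example.

```python
import numpy as np

def synthesize_degradation(x, rng=None):
    """Degrade a clean image x (H, W, C float array in [0, 1]) and build a
    restoration prompt c_r whose numeric parameters mirror the sampled
    degradation intensity. Hypothetical sketch of the training-data pipeline."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = float(rng.uniform(0.0, 0.2))   # Gaussian noise level
    scale = int(rng.choice([1, 2, 4]))     # downsampling factor
    y = x + rng.normal(0.0, sigma, size=x.shape)  # additive noise
    y = y[::scale, ::scale]                        # naive nearest downsample
    c_r = f"Denoise with sigma {sigma:.2f}; upsample x{scale}"
    return np.clip(y, 0.0, 1.0), c_r
```

Because the prompt is generated from the same sampled parameters that produced the degradation, the model can learn a quantitative mapping between the language instruction and the restoration strength.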
Then, we inject the synthetic restoration prompt into a ControlNet adapter, which connects to the frozen backbone, driven by the semantic prompt ${c}_s$, through our proposed modulation fusion blocks ($\gamma$, $\beta$).
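The fusion idea, rescaling the generative prior rather than adding the adapter signal directly, can be sketched as a learned per-channel scale-and-shift. This is an assumed simplification (dense projections on pooled features); the paper's exact block design may differ.

```python
import numpy as np

def modulation_fusion(backbone_feat, control_feat, W_gamma, b_gamma, W_beta, b_beta):
    """Hypothetical sketch of a modulation fusion block: the ControlNet branch
    predicts a scale (gamma) and shift (beta) that rescale the frozen
    backbone's features, i.e. f' = gamma * f + beta."""
    gamma = control_feat @ W_gamma + b_gamma  # learned scale from adapter features
    beta = control_feat @ W_beta + b_beta     # learned shift from adapter features
    return gamma * backbone_feat + beta
```

With the scale branch initialized to output ones and the shift branch to output zeros, the block starts as an identity mapping, so the pretrained generative prior is preserved at the beginning of fine-tuning.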
During test time, users can employ the SPIRE framework either as a blind restoration model, with the restoration prompt $\textit{``Remove all degradation''}$ and an empty semantic prompt $\varnothing$, or manually adjust the restoration prompt ${c}_r$ and the semantic prompt ${c}_s$ to obtain the desired result.
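The two test-time modes amount to different prompt configurations. A minimal sketch, where the blind-mode restoration prompt is taken from the text above and the controlled-mode strings are purely illustrative:

```python
# Blind restoration: fixed restoration prompt, empty semantic prompt.
blind_mode = {
    "restoration_prompt": "Remove all degradation",
    "semantic_prompt": "",  # the empty prompt (denoted by the empty-set symbol)
}

# Controlled restoration: user-specified strengths and content description
# (example strings are hypothetical).
controlled_mode = {
    "restoration_prompt": "Denoise with sigma 0.10; deblur",
    "semantic_prompt": "a photo of a corgi on the beach",
}
```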
Test-time semantic prompting. Our framework restores degraded images guided by flexible semantic prompts, while unrelated background elements and global tones remain aligned with the degraded input conditioning.
Prompt space walking visualization for the restoration prompt. Our method can decouple the restoration direction and strength via only natural language prompting.
@article{qi2023tip,
  title={TIP: Text-Driven Image Processing with Semantic and Restoration Instructions},
  author={Chenyang Qi and Zhengzhong Tu and Keren Ye and Mauricio Delbracio and Peyman Milanfar and Qifeng Chen and Hossein Talebi},
  journal={arXiv preprint arXiv:2312.11595},
  year={2023}
}