Prompt-to-Prompt Image Editing with Cross Attention Control
https://paperswithcode.com/paper/prompt-to-prompt-image-editing-with-cross
https://arxiv.org/abs/2208.01626
https://github.com/google/prompt-to-prompt
Text-driven editing for diffusion models; nine days after open-sourcing, the repo has already earned 900 stars.
Recent large-scale text-driven synthesis models have attracted much attention thanks to their remarkable ability to generate highly diverse images that follow given text prompts. Such text-based synthesis methods are particularly appealing to humans, who are used to describing their intent verbally. It is therefore natural to extend text-driven image synthesis to text-driven image editing. Editing is challenging for these generative models: an innate property of an editing technique is to preserve most of the original image, yet in text-based models even a small modification of the text prompt often leads to a completely different outcome. State-of-the-art methods mitigate this by requiring users to provide a spatial mask to localize the edit, which discards the original structure and content within the masked region. In this paper, we pursue an intuitive prompt-to-prompt editing framework in which edits are controlled by text only. To this end, we analyze a text-conditioned model in depth and observe that the cross-attention layers are the key to controlling the relation between the spatial layout of the image and each word in the prompt. Based on this observation, we present several applications that control image synthesis by editing the textual prompt alone: localized editing by replacing a word, global editing by adding a specification, and even fine-grained control over the extent to which a word is reflected in the image. We present results over diverse images and prompts, demonstrating high-quality synthesis and fidelity to the edited prompts.
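The mechanism the abstract points at can be sketched in a few lines: during generation, the cross-attention maps computed for the source prompt are injected into the pass that uses the edited prompt, so the edited words inherit the original spatial layout. Below is a minimal, self-contained PyTorch sketch of that idea; the class name, the `stored_probs` buffer, and the `inject` flag are hypothetical stand-ins, not the actual google/prompt-to-prompt API, which applies this per attention layer inside the diffusion U-Net across denoising steps.

```python
# A minimal sketch of cross-attention map injection, the mechanism the
# abstract describes. All names below (CrossAttention, stored_probs, inject)
# are illustrative assumptions, not the official google/prompt-to-prompt API.
import math
import torch


class CrossAttention(torch.nn.Module):
    """Cross-attention between image features (queries) and prompt tokens (keys/values)."""

    def __init__(self, dim: int, text_dim: int):
        super().__init__()
        self.to_q = torch.nn.Linear(dim, dim, bias=False)
        self.to_k = torch.nn.Linear(text_dim, dim, bias=False)
        self.to_v = torch.nn.Linear(text_dim, dim, bias=False)
        self.stored_probs = None  # attention maps recorded during the source pass
        self.inject = False       # when True, reuse the recorded maps

    def forward(self, x: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        q = self.to_q(x)                                # (batch, pixels, dim)
        k = self.to_k(text_emb)                         # (batch, tokens, dim)
        v = self.to_v(text_emb)
        scores = q @ k.transpose(-1, -2) / math.sqrt(q.shape[-1])
        probs = scores.softmax(dim=-1)                  # (batch, pixels, tokens)
        if self.inject and self.stored_probs is not None:
            # The key move: keep the *source* prompt's attention maps so the
            # spatial layout is preserved, while the values still come from
            # the *edited* prompt's tokens. (Word swaps assume both prompts
            # have the same token length.)
            probs = self.stored_probs
        else:
            self.stored_probs = probs.detach()
        return probs @ v


# Two passes over the same features: the first records the source prompt's
# attention maps, the second reuses them with the edited prompt's embedding.
attn = CrossAttention(dim=64, text_dim=32)
x = torch.randn(1, 16, 64)          # 16 spatial features
src_text = torch.randn(1, 8, 32)    # source prompt embedding (8 tokens)
edit_text = torch.randn(1, 8, 32)   # edited prompt embedding (same length)

_ = attn(x, src_text)               # pass 1: record source attention maps
attn.inject = True
out = attn(x, edit_text)            # pass 2: edited words, original layout
print(out.shape)                    # torch.Size([1, 16, 64])
```

In the paper's framing, this swap is what makes "localized editing by replacing a word" possible: the layout comes from the source attention maps, and only the replaced token's contribution to the output changes.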