Authors
Chenshu Xu, Yangyang Xu, Huaidong Zhang, Xuemiao Xu, Shengfeng He
Source
Journal: IEEE Transactions on Visualization and Computer Graphics
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Pages: 1-12
Citations: 1
Identifier
DOI: 10.1109/tvcg.2024.3397712
Abstract
Text-to-image generation models have significantly broadened the horizons of creative expression through the power of natural language. However, navigating these models to generate unique concepts, alter their appearance, or reimagine them in unfamiliar roles presents an intricate challenge. For instance, how can we exploit language-guided models to transpose an anime character into a different art style, or envision a beloved character in a radically different setting or role? This paper unveils a novel approach named DreamAnime, designed to provide this level of creative freedom. Using a minimal set of 2-3 images of a user-specified concept such as an anime character or an art style, we teach our model to encapsulate its essence through novel "words" in the embedding space of a pre-existing text-to-image model. Crucially, we disentangle the concepts of style and identity into two separate "words", thus providing the ability to manipulate them independently. These distinct "words" can then be pieced together into natural language sentences, promoting an intuitive and personalized creative process. Empirical results suggest that this disentanglement into separate word embeddings successfully captures a broad range of unique and complex concepts, with each word focusing on style or identity as appropriate. Comparisons with existing methods illustrate DreamAnime's superior capacity to accurately interpret and recreate the desired concepts across various applications and tasks. Code is available at https://github.com/chnshx/DreamAnime.
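The abstract describes learning new "words" in a frozen text-to-image model's embedding space, with identity and style held in two separately optimizable tokens. The toy PyTorch sketch below illustrates that general textual-inversion-style idea only; all names, shapes, and the surrogate loss are illustrative assumptions, not the authors' DreamAnime implementation.

```python
import torch
import torch.nn as nn

# Toy setup (assumed, not from the paper): a small frozen embedding table
# standing in for a pre-trained text encoder's vocabulary.
VOCAB, DIM = 1000, 32
torch.manual_seed(0)
frozen = nn.Embedding(VOCAB, DIM)
frozen.weight.requires_grad_(False)  # pre-trained embeddings stay fixed

# Two trainable pseudo-word vectors: one for identity, one for style.
# Keeping them as separate parameters is what lets them be manipulated
# independently later.
identity_vec = nn.Parameter(torch.randn(DIM) * 0.01)
style_vec = nn.Parameter(torch.randn(DIM) * 0.01)

def embed_prompt(token_ids, use_identity=True, use_style=True):
    """Look up frozen embeddings, then append the learned pseudo-words."""
    parts = [frozen(torch.tensor(token_ids))]
    if use_identity:
        parts.append(identity_vec.unsqueeze(0))
    if use_style:
        parts.append(style_vec.unsqueeze(0))
    return torch.cat(parts, dim=0)

# Surrogate targets standing in for the (much more involved) diffusion
# reconstruction loss used when learning concepts from 2-3 images.
target_id = torch.randn(DIM)
target_style = torch.randn(DIM)

opt = torch.optim.Adam([identity_vec, style_vec], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    # Each pseudo-word is supervised by its own target; in this sketch
    # that separation is what keeps identity and style disentangled.
    loss = ((identity_vec - target_id) ** 2).mean() + \
           ((style_vec - target_style) ** 2).mean()
    loss.backward()
    opt.step()

# The learned words can now be mixed and matched inside ordinary prompts,
# e.g. identity without style:
seq = embed_prompt([1, 2, 3], use_identity=True, use_style=False)
print(seq.shape)  # 3 prompt tokens + 1 pseudo-word
```

Because only the two new vectors receive gradients, the pre-trained model is untouched, and each learned "word" composes with normal vocabulary tokens in a prompt.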