Keywords: Computer science; Computer vision; Artificial intelligence; Image editing; Image-to-image translation; Latent space factorization; Orthogonality; Encoder; Pattern recognition; Algorithms
Authors
Yusuf Dalva,Hamza Pehlivan,Öykü Irmak Hatipoğlu,Cansu Moran,Aysegul Dundar
Identifier
DOI:10.1109/tpami.2023.3308102
Abstract
We propose an image-to-image translation framework for facial attribute editing with disentangled, interpretable latent directions. The facial attribute editing task poses two challenges: editing the targeted attribute with controllable strength, and disentangling the attribute representations so that the other attributes are preserved during edits. To this end, inspired by latent space factorization works on fixed pretrained GANs, we design attribute editing via latent space factorization and, for each attribute, learn a linear direction that is orthogonal to the others. We train these directions with orthogonality constraints and disentanglement losses. To project images into semantically organized latent spaces, we set up an encoder-decoder architecture with attention-based skip connections. We compare extensively with previous image translation algorithms and with editing methods built on pretrained GANs. Our experiments show that our method significantly improves over the state of the art.
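The two core ideas in the abstract, an orthogonality constraint on per-attribute latent directions and strength-controllable edits along those directions, can be sketched numerically. This is a minimal NumPy illustration, not the authors' implementation: the direction matrix, penalty form, and `alpha` strength parameter are assumptions for illustration, and in the actual method the directions would be trained jointly with disentanglement losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K learnable attribute directions in a D-dim latent space.
K, D = 5, 64
directions = rng.normal(size=(K, D))

def orthogonality_penalty(dirs):
    """Sum of squared off-diagonal entries of the Gram matrix of
    unit-normalized directions; zero iff the directions are mutually
    orthogonal. A loss of this form can be minimized during training."""
    unit = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    gram = unit @ unit.T
    off_diag = gram - np.eye(len(dirs))
    return float(np.sum(off_diag ** 2))

def edit_latent(z, dirs, k, alpha):
    """Edit attribute k of latent code z with controllable strength alpha
    by moving along the (normalized) k-th direction."""
    unit_k = dirs[k] / np.linalg.norm(dirs[k])
    return z + alpha * unit_k

z = rng.normal(size=D)
z_edit = edit_latent(z, directions, k=0, alpha=2.0)

# Random directions are almost surely non-orthogonal (positive penalty),
# while orthonormal rows of the identity give zero penalty.
print(orthogonality_penalty(directions))
print(orthogonality_penalty(np.eye(D)[:K]))
```

With orthogonal directions, moving along one attribute's direction has no component along the others, which is how the constraint supports preserving non-target attributes during an edit.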