Keywords
Artificial intelligence, Computer science, Computer vision, Feature, Normalization, Segmentation, Pose, Block, Distortion, Focus (optics), Pattern recognition, Mathematics, Amplifier, Computer network, Philosophy, Linguistics, Geometry, Bandwidth (computing), Sociology, Anthropology, Physics, Optics
Authors
Feng Yu, Ailing Hua, Chenghu Du, Minghua Jiang, Wei Xiong, Tao Peng, Lijun Xu, Xinrong Hu
Source
Journal: IEEE Transactions on Consumer Electronics
[Institute of Electrical and Electronics Engineers]
Date: 2023-08-17
Volume/Issue: 69 (4): 1101-1113
Cited by: 15
Identifier
DOI: 10.1109/tce.2023.3306206
Abstract
Multi-pose virtual try-on has become a research focus for online clothes shopping because fixed-pose virtual try-on methods cannot render the try-on effect for a different pose. The challenge of multi-pose virtual try-on is that detailed information in the generated image is difficult to preserve under pose transformation and garment distortion. To address this issue, we propose a multi-pose virtual try-on method via appearance flow and feature filtering (VTON-MP). First, a segmentation generation network predicts the body semantic distribution of the target pose from its 2D keypoints. Second, the desired garment is warped to match the body posture using the appearance flow figure alignment network (AFFAN). Third, a filtering-enhancement block (FEB) suppresses the weights of latent useless features and enhances the weights of effective appearance features. Finally, spatially-adaptive instance normalization (SAIN) further optimizes the spatial relationships of body parts in the resulting image. In subjective and objective experiments on the MPV dataset, the proposed VTON-MP achieves the best performance among state-of-the-art methods in terms of SSIM, PSNR, and FID. The experimental results demonstrate that the proposed algorithm better retains image details (head, hands, arms, and trousers).
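As a rough, hedged illustration of two of the building blocks named in the abstract, the PyTorch-style sketch below shows (a) how a predicted appearance flow can warp a garment image with grid_sample, and (b) a SPADE-like spatially-adaptive instance normalization that modulates instance-normalized features with per-pixel scale and shift maps predicted from the body segmentation. This is not the authors' implementation of AFFAN or SAIN; all module names, tensor layouts, and hyperparameters here are illustrative assumptions.

    # Minimal sketch (assumptions, not the paper's code): appearance-flow
    # warping of a garment image and a SPADE-like spatially-adaptive
    # instance normalization conditioned on a body segmentation map.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def warp_by_appearance_flow(garment, flow):
        """Warp a garment image with a per-pixel appearance flow.

        garment: (N, 3, H, W) source garment image
        flow:    (N, 2, H, W) predicted (x, y) offsets in pixels (assumed layout)
        """
        n, _, h, w = garment.shape
        # Base sampling grid in normalized [-1, 1] coordinates, (N, H, W, 2).
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=garment.device, dtype=garment.dtype),
            torch.linspace(-1, 1, w, device=garment.device, dtype=garment.dtype),
            indexing="ij")
        base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
        # Convert pixel offsets to normalized offsets and add to the base grid.
        offset = torch.stack(
            (flow[:, 0] * 2.0 / max(w - 1, 1), flow[:, 1] * 2.0 / max(h - 1, 1)),
            dim=-1)
        grid = base + offset
        # Bilinear sampling of the garment at the flowed locations.
        return F.grid_sample(garment, grid, mode="bilinear",
                             padding_mode="border", align_corners=True)

    class SpatiallyAdaptiveIN(nn.Module):
        """Instance-normalize features, then modulate them with per-pixel
        scale/shift predicted from the body segmentation map (SPADE-like)."""
        def __init__(self, feat_channels, seg_channels, hidden=128):
            super().__init__()
            self.norm = nn.InstanceNorm2d(feat_channels, affine=False)
            self.shared = nn.Sequential(
                nn.Conv2d(seg_channels, hidden, 3, padding=1), nn.ReLU())
            self.gamma = nn.Conv2d(hidden, feat_channels, 3, padding=1)
            self.beta = nn.Conv2d(hidden, feat_channels, 3, padding=1)

        def forward(self, feat, seg):
            # Resize the segmentation to the feature resolution, then predict
            # spatially-varying modulation parameters from it.
            seg = F.interpolate(seg, size=feat.shape[-2:], mode="nearest")
            h = self.shared(seg)
            return self.norm(feat) * (1 + self.gamma(h)) + self.beta(h)

In a multi-pose pipeline of this kind, the warped garment and the segmentation-conditioned normalization would typically be combined inside the try-on generator; the exact wiring and the feature-filtering block (FEB) of VTON-MP are described in the paper itself.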