Keywords
Computer science, Image-to-image translation, Artificial intelligence, Adversarial networks, Identification, Context, Generative adversarial network, Deep learning, Image editing, Deep neural network, Pattern recognition, Computer vision
Authors
Subin Varghese,Vedhus Hoskere
Identifier
DOI: 10.1016/j.aei.2023.101940
Abstract
Condition assessment of civil infrastructure through manual inspections can be time-consuming, subjective, and unsafe. Advances in computer vision and Deep Neural Networks (DNNs) provide methods for automating important condition assessment tasks such as damage and context identification. One critical challenge in training robust and generalizable DNNs for damage identification is the difficulty of obtaining large and diverse datasets. To maximally leverage available data, researchers have investigated using synthetic images of damaged structures from Generative Adversarial Networks (GANs) for data augmentation. However, GANs are limited in the diversity of data they can produce, as they can only interpolate between samples of damaged structures in a dataset. Unpaired image-to-image translation using Cycle-Consistent Adversarial Networks (CCAN) provides one means of extending the diversity of, and control over, generated images, but has not been investigated for applications in condition assessment. We present EIGAN, a novel CCAN architecture for generating realistic synthetic images of a damaged structure given an image of its undamaged state. EIGAN can translate undamaged images to damaged representations and vice versa while retaining the geometric structure of the infrastructure (e.g., building shape, layout, color, and size). We create a new unpaired dataset of damaged and undamaged building images taken after the 2017 Puebla Earthquake. Using this dataset, we demonstrate, with both qualitative and quantitative measures, how EIGAN addresses the shortcomings of three other established CCAN architectures for damage translation. Additionally, we introduce a new methodology for exploring the latent space of EIGAN, allowing some control over the properties of the generated damage (e.g., the damage severity). The results demonstrate that unpaired image-to-image translation of undamaged to damaged structures is an effective means of data augmentation for improving network performance.
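As background for readers unfamiliar with cycle-consistent unpaired translation, the sketch below illustrates the generic cycle-consistency loss used by CCAN-family models of the kind the abstract describes: one generator maps undamaged images toward a damaged appearance, a second maps damaged images back, and an L1 reconstruction penalty encourages the round trip to preserve the building's geometry. This is a minimal illustrative sketch only, not the authors' EIGAN implementation; the toy generator, the weight lambda_cyc, and all names are assumptions made for the example.

```python
# Minimal sketch of the cycle-consistency idea behind CycleGAN-style unpaired
# image-to-image translation (the general family the paper builds on).
# NOT the EIGAN architecture: generators, loss weight, and names are illustrative.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy image-to-image generator (stand-in for a full ResNet/U-Net generator)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

# G maps undamaged -> damaged, F maps damaged -> undamaged (hypothetical names).
G = TinyGenerator()
F = TinyGenerator()
l1 = nn.L1Loss()

def cycle_consistency_loss(x_undamaged, y_damaged, lambda_cyc=10.0):
    """L1 cycle loss: F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y.
    This is the term that pushes the translation to keep scene structure
    (building shape, layout, color) while changing only damage-related appearance."""
    loss_x = l1(F(G(x_undamaged)), x_undamaged)   # undamaged -> damaged -> undamaged
    loss_y = l1(G(F(y_damaged)), y_damaged)       # damaged -> undamaged -> damaged
    return lambda_cyc * (loss_x + loss_y)

if __name__ == "__main__":
    x = torch.rand(2, 3, 64, 64)  # batch of "undamaged" images
    y = torch.rand(2, 3, 64, 64)  # batch of "damaged" images, unpaired with x
    print(cycle_consistency_loss(x, y).item())
```

In a full training loop this cycle term would be combined with adversarial losses from per-domain discriminators; the abstract's latent-space exploration for controlling damage severity is specific to EIGAN and is not reproduced here.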