Computer science
Curse of dimensionality
Voxel
Set (abstract data type)
Key (lock)
Artificial intelligence
Pattern recognition (psychology)
Image (mathematics)
Volume (thermodynamics)
Construct (Python library)
Data mining
Computer security
Quantum mechanics
Physics
Programming language
Authors
Steve Kench, Samuel J. Cooper
Identifier
DOI: 10.1038/s42256-021-00322-1
Abstract
Generative adversarial networks (GANs) can be trained to generate three-dimensional (3D) image data, which are useful for design optimization. However, this conventionally requires 3D training data, which are challenging to obtain. Two-dimensional (2D) imaging techniques tend to be faster, higher resolution, better at phase identification and more widely available. Here we introduce a GAN architecture, SliceGAN, that is able to synthesize high-fidelity 3D datasets using a single representative 2D image. This is especially relevant for the task of material microstructure generation, as a cross-sectional micrograph can contain sufficient information to statistically reconstruct 3D samples. Our architecture implements the concept of uniform information density, which ensures both that generated volumes are equally high quality at all points in space and that arbitrarily large volumes can be generated. SliceGAN has been successfully trained on a diverse set of materials, demonstrating the widespread applicability of this tool. The quality of generated micrographs is shown through a statistical comparison of synthetic and real datasets of a battery electrode in terms of key microstructural metrics. Finally, we find that the generation time for a 10^8 voxel volume is on the order of a few seconds, yielding a path for future studies into high-throughput microstructural optimization.

A generative approach called SliceGAN is demonstrated that can construct complex three-dimensional (3D) images from representative two-dimensional (2D) image examples. This is a promising approach in particular for studying microstructured materials where acquiring good-quality 3D data is challenging; 3D datasets can be created with SliceGAN, making use of high-quality 2D imaging techniques that are widely available.
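The core trick the abstract describes is training against 2D data: a generated 3D volume is decomposed into 2D slices along each axis so that a conventional 2D discriminator can compare them with real cross-sectional micrographs. A minimal NumPy sketch of that slicing step follows; the function name and shapes are illustrative, not taken from the authors' released code:

```python
import numpy as np

def slices_along_axes(volume):
    """Decompose a cubic 3D volume into all of its 2D slices
    along the x, y and z axes.

    In SliceGAN-style training, each returned slice would be
    scored by a 2D discriminator against real 2D micrographs,
    so only 2D training data are ever required.
    """
    slices = []
    for axis in range(3):
        # Move the slicing axis to the front, then iterate over it.
        moved = np.moveaxis(volume, axis, 0)
        slices.extend(moved[i] for i in range(moved.shape[0]))
    return slices

# A 64^3 volume yields 3 * 64 = 192 slices, each of shape 64 x 64.
volume = np.random.rand(64, 64, 64)
all_slices = slices_along_axes(volume)
```

For an isotropic material a single representative 2D image can supply the real examples for all three slicing directions; anisotropic materials would need one representative image per orientation.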