Computer science
Hash function
Rendering (computer graphics)
Speedup
Artificial neural network
Hash table
CUDA
Drawing
Leverage (statistics)
Memory bandwidth
Parallel computing
Graphics hardware
Artificial intelligence
Computer graphics (images)
Computer security
Authors
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
Identifier
DOI: 10.1145/3528223.3530127
Abstract
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of ${1920\!\times\!1080}$.
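The abstract's core idea — a small MLP fed by trilinearly interpolated feature vectors looked up in per-level hash tables — can be illustrated compactly. Below is a minimal NumPy sketch, not the authors' fully-fused CUDA implementation: the hyperparameters (`L`, `T`, `F`, the base resolution) are illustrative, `encode` and `spatial_hash` are hypothetical helper names, and only the forward pass is shown (the paper optimizes the table entries through this interpolation via stochastic gradient descent). The hash primes are the ones given in the paper.

```python
import numpy as np

# Spatial-hash primes from Müller et al. 2022 (pi_1 = 1 keeps one axis coherent).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def spatial_hash(coords, table_size):
    """XOR-fold integer grid coordinates (..., 3) into hash-table indices."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for i in range(coords.shape[-1]):
        h ^= coords[..., i].astype(np.uint64) * PRIMES[i]
    return h % np.uint64(table_size)

def encode(x, tables, resolutions):
    """Concatenate trilinearly interpolated features from every level.

    x           : (N, 3) points in [0, 1)^3
    tables      : list of (T, F) trainable feature tables, one per level
    resolutions : per-level grid resolutions (geometric progression)
    """
    outputs = []
    for table, res in zip(tables, resolutions):
        xs = x * res                          # scale point to this level's grid
        lo = np.floor(xs).astype(np.int64)    # lower corner of the enclosing cell
        w = xs - lo                           # fractional position within the cell
        feat = np.zeros((x.shape[0], table.shape[1]))
        for corner in range(8):               # the 2^3 corners of the cell
            offset = np.array([(corner >> k) & 1 for k in range(3)])
            idx = spatial_hash(lo + offset, table.shape[0])
            cw = np.prod(np.where(offset == 1, w, 1.0 - w), axis=-1)
            feat += cw[:, None] * table[idx]  # trilinear blend of hashed entries
        outputs.append(feat)
    return np.concatenate(outputs, axis=-1)   # (N, L*F), fed to the small MLP

# Illustrative sizes; the paper's defaults are L=16 levels, T=2^19 entries, F=2.
rng = np.random.default_rng(0)
L, T, F = 4, 2**14, 2
resolutions = [16 * 2**l for l in range(L)]
tables = [rng.normal(0.0, 1e-4, size=(T, F)) for _ in range(L)]

x = rng.random((5, 3))
print(encode(x, tables, resolutions).shape)   # -> (5, 8)
```

Because the grid resolution grows geometrically across levels, coarse levels have fewer grid points than table entries and map injectively, while fine levels alias through the hash; as the abstract notes, the multiresolution structure lets the network disambiguate these collisions.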