Keywords
Kernel (algebra)
Convolutional neural network
Pixel
Convolutional code
Convolution (computer science)
Computer science
Robustness (evolution)
Block (permutation group theory)
Artificial intelligence
Pattern recognition (psychology)
Algorithm
Mathematics
Discrete mathematics
Artificial neural network
Decoding methods
Combinatorics
Biochemistry
Gene
Chemistry
Authors
Tianyu Ma, Alan Q. Wang, Adrian V. Dalca, Mert R. Sabuncu
Identifier
DOI:10.1016/j.media.2023.102796
Abstract
The convolutional neural network (CNN) is one of the most commonly used architectures for computer vision tasks. The key building block of a CNN is the convolutional kernel that aggregates information from the pixel neighborhood and shares weights across all pixels. A standard CNN's capacity, and thus its performance, is directly related to the number of learnable kernel weights, which is determined by the number of channels and the kernel size (support). In this paper, we present the hyper-convolution, a novel building block that implicitly encodes the convolutional kernel using spatial coordinates. Unlike a regular convolutional kernel, whose weights are independently learned, hyper-convolution kernel weights are correlated through an encoder that maps spatial coordinates to their corresponding values. Hyper-convolutions decouple kernel size from the total number of learnable parameters, enabling a more flexible architecture design. We demonstrate in our experiments that replacing regular convolutions with hyper-convolutions can improve performance with fewer parameters, and increase robustness against noise. We provide our code here: https://github.com/tym002/Hyper-Convolution.
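The core idea in the abstract can be sketched in plain NumPy: a small coordinate encoder (here a hypothetical two-layer MLP with fixed random weights, standing in for the paper's learned encoder) maps each normalized (x, y) offset within the kernel support to a weight value, and the resulting kernel is then applied with an ordinary 2-D convolution. Function names, layer sizes, and the tanh nonlinearity are illustrative assumptions, not the authors' implementation; the point is that the parameter count (W1, W2) stays the same regardless of kernel size.

```python
import numpy as np

def hyper_kernel(size, hidden=16, seed=0):
    """Generate a size x size kernel from spatial coordinates via a tiny MLP.

    The MLP weights W1, W2 play the role of the learnable parameters;
    their count is independent of `size`, so kernel support is decoupled
    from model capacity (illustrative sketch, not the paper's encoder).
    """
    rng = np.random.default_rng(seed)
    # Normalized (x, y) coordinate of each kernel tap, shape (size*size, 2).
    ax = np.linspace(-1.0, 1.0, size)
    coords = np.stack(np.meshgrid(ax, ax), axis=-1).reshape(-1, 2)
    W1 = rng.standard_normal((2, hidden)) / np.sqrt(2)
    W2 = rng.standard_normal((hidden, 1)) / np.sqrt(hidden)
    h = np.tanh(coords @ W1)          # encoder hidden layer
    return (h @ W2).reshape(size, size)

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D cross-correlation with the generated kernel."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 7x7 kernel uses the same number of encoder parameters as a 3x3 one.
k3, k7 = hyper_kernel(3), hyper_kernel(7)
image = np.ones((10, 10))
print(conv2d_valid(image, k7).shape)  # (4, 4)
```

Because neighboring taps get nearby coordinates, the encoder naturally produces spatially smooth kernels, which is one intuition for the robustness-to-noise result reported in the abstract.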