Artificial neural network
Computer science
FLOPS
Construct (Python library)
Deep neural network
Computational complexity theory
Stacking
Ordinary differential equation
Training (meteorology)
Artificial intelligence
Component (thermodynamics)
Algorithm
Differential equation
Mathematics
Parallel computing
Computer network
Mathematical analysis
Meteorology
Physics
Thermodynamics
Nuclear magnetic resonance
Authors
Zhengbo Luo, Sei-ichiro Kamata, Zitang Sun, Weilian Zhou
Identifier
DOI:10.1109/icassp39728.2021.9413916
Abstract
Most deep neural network (DNN) architectures have a fixed complexity, in terms of both computational cost (parameters and FLOPs) and expressiveness. In this work, we experimentally investigate the effectiveness of using neural ordinary differential equations (NODEs) as a component that supplies additional depth to relatively shallow networks, in place of stacked layers, achieving improvements with fewer parameters. Moreover, we construct deep neural networks with flexible complexity based on NODEs, which enables the system to adjust its complexity during training. The proposed method achieves more parameter-efficient performance than stacking standard DNN layers, and it alleviates the heavy computational cost that NODEs otherwise require.
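The abstract describes substituting a NODE block for a stack of layers, so that the ODE solver's integration steps, rather than stacked weights, provide the effective depth. Below is a minimal sketch of that general idea, not the authors' implementation: it assumes PyTorch and the third-party torchdiffeq package, and the ODEFunc/ODEBlock names, the layer choices, and the 0-to-1 integration interval are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): one NODE block in place of a
# stack of residual layers. Assumes PyTorch and torchdiffeq
# (https://github.com/rtqichen/torchdiffeq) are installed.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # odeint_adjoint trades compute for memory


class ODEFunc(nn.Module):
    """Dynamics f(t, h): a single conv layer reused over continuous depth t."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.GroupNorm(min(32, channels), channels)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, t, h):
        return self.conv(torch.relu(self.norm(h)))


class ODEBlock(nn.Module):
    """Integrates dh/dt = f(t, h) from t=0 to t=1; the solver's adaptive
    step count plays the role of network depth."""
    def __init__(self, channels, tol=1e-3):
        super().__init__()
        self.func = ODEFunc(channels)
        self.register_buffer("t", torch.tensor([0.0, 1.0]))
        self.tol = tol  # illustrative tolerance; controls solver effort

    def forward(self, h):
        out = odeint(self.func, h, self.t, rtol=self.tol, atol=self.tol)
        return out[-1]  # hidden state at t=1


# Usage: a shallow stem followed by one ODE block instead of many stacked layers.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    ODEBlock(64),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),
)
x = torch.randn(2, 3, 32, 32)
print(model(x).shape)  # torch.Size([2, 10])
```

In a sketch like this, tightening rtol/atol makes the adaptive solver take more function evaluations (more effective depth) without adding parameters, which illustrates in spirit how a NODE-based network's complexity can be adjusted independently of its parameter count.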