Markov chain Monte Carlo
Computer science
Reversible-jump Markov chain Monte Carlo
Bayesian inference
Variable-order Bayesian network
Dynamic Bayesian network
Artificial intelligence
Artificial neural network
Machine learning
Bayesian probability
Prior probability
Inference
Context (archaeology)
Paleontology
Biology
Authors
Nhat-Minh Nguyen, Minh-Ngoc Tran, Rohitash Chandra
Source
Journal: Neurocomputing
[Elsevier]
Date: 2023-10-29
Volume/Article number: 564: 126960
Citations: 2
Identifier
DOI: 10.1016/j.neucom.2023.126960
Abstract
The challenge to automatically select the best among models of varying dimensions remains open, especially in the context of complex models, sparse data, and noisy data. Bayesian neural networks employ Markov chain Monte Carlo (MCMC) and variational inference methods for training (sampling) model parameters. However, the progress of MCMC methods in deep learning has been slow due to high computational requirements and uninformative priors of model parameters. Reversible jump MCMC allows sampling of model parameters of variable lengths; hence, it has the potential to train Bayesian neural networks effectively. In this paper, we implement reversible jump MCMC for training dynamic Bayesian neural networks that feature cascaded neural networks with dynamic hidden and input neurons. We apply the methodology to a wide range of regression and classification problems from the literature. The results show that our proposed framework provides an effective approach for the dynamic exploration of models while featuring uncertainty quantification that not only caters to model parameters but also extends to model topology. This opens up the road for uncertainty quantification in dynamic neural networks where hidden and input neurons can change over time.
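The trans-dimensional sampling the abstract refers to can be illustrated with a minimal reversible-jump MCMC sketch. The toy below is not the paper's method or data: it selects the order of a polynomial regression (a stand-in for the number of hidden/input neurons), using within-model random-walk updates plus birth/death moves. Because the new coefficient in a birth move is drawn from its own prior, the prior and proposal densities cancel, the Jacobian is 1, and the acceptance ratio reduces to the likelihood ratio; out-of-range moves are simply rejected, keeping the move choice symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quadratic data (assumed toy example, not from the paper)
n = 200
x = np.linspace(-1.0, 1.0, n)
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(0.0, 0.1, n)

K_MAX = 5          # maximum number of polynomial coefficients (model order)
SIGMA = 0.1        # known observation-noise standard deviation
SIGMA_PRIOR = 2.0  # N(0, SIGMA_PRIOR^2) prior on each coefficient

def design(k):
    # Columns 1, x, x^2, ... up to order k-1
    return np.vander(x, k, increasing=True)

def log_lik(theta):
    resid = y - design(len(theta)) @ theta
    return -0.5 * np.sum(resid**2) / SIGMA**2

def log_prior(theta):
    return -0.5 * np.sum(theta**2) / SIGMA_PRIOR**2

theta = np.zeros(1)                      # start in the smallest model
ll, lp = log_lik(theta), log_prior(theta)
ks = []                                  # sampled model orders

for it in range(20000):
    if rng.random() < 0.5:
        # Within-model random-walk Metropolis update of all coefficients
        prop = theta + rng.normal(0.0, 0.05, len(theta))
        ll_p, lp_p = log_lik(prop), log_prior(prop)
        if np.log(rng.random()) < (ll_p + lp_p) - (ll + lp):
            theta, ll, lp = prop, ll_p, lp_p
    else:
        # Trans-dimensional birth/death move (symmetric choice;
        # invalid proposals at the boundaries are rejected outright)
        birth = rng.random() < 0.5
        if birth and len(theta) < K_MAX:
            # New coefficient drawn from its prior: prior cancels the
            # proposal density, Jacobian is 1, so the acceptance
            # probability is just the likelihood ratio
            u = rng.normal(0.0, SIGMA_PRIOR)
            prop = np.append(theta, u)
            ll_p = log_lik(prop)
            if np.log(rng.random()) < ll_p - ll:
                theta, ll, lp = prop, ll_p, log_prior(prop)
        elif (not birth) and len(theta) > 1:
            prop = theta[:-1]
            ll_p = log_lik(prop)
            if np.log(rng.random()) < ll_p - ll:
                theta, ll, lp = prop, ll_p, log_prior(prop)
    ks.append(len(theta))

ks = np.array(ks[5000:])  # discard burn-in
print("posterior model-order counts:", np.bincount(ks, minlength=K_MAX + 1)[1:])
```

The chain visits models of different dimension, so the histogram of visited orders is itself a posterior over model structure; this is the same mechanism that, in the paper's setting, yields uncertainty quantification over network topology rather than only over weights.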