Backpropagation
Artificial neural network
Gradient descent
Computer science
Extended Kalman filter
Kalman filter
Feedforward neural network
Feedforward
Control theory
Filter (signal processing)
Computation
Chaotic
Ensemble Kalman filter
Artificial intelligence
Algorithm
Machine learning
Control engineering
Engineering
Computer vision
Control
Authors
Naseem Alsadi, Waleed Hilal, Onur Surucu, Alessandro Giuliano, Andrew Gadsden, John Yawney, Mohammad Al-Shabi
Abstract
Artificial feedforward neural networks (ANNs) have traditionally been trained by backpropagation using gradient descent, optimizing the network's weights and parameters during the training phase so as to minimize out-of-sample error in the output during testing. However, gradient descent (GD) has been shown to be slow and computationally inefficient in comparison with studies implementing the extended Kalman filter (EKF) and unscented Kalman filter (UKF) as optimizers in ANNs. In this paper, a new method of training ANNs is proposed utilizing the sliding innovation filter (SIF). The SIF, introduced by Gadsden et al., has been demonstrated to be a more robust predictor-corrector than the Kalman filters, especially in ill-conditioned situations or in the presence of modelling uncertainties. We propose implementing the SIF as an optimizer for training ANNs. The proposed ANN is trained with the SIF to predict the Mackey-Glass chaotic series, and results demonstrate that the proposed method improves computation time compared to current estimation strategies for training ANNs while achieving results comparable to a UKF-trained neural network.
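The predictor-corrector structure the abstract refers to can be illustrated with a minimal sketch of one SIF iteration on a linear system, following the general published SIF formulation (predict with the model, then correct using a saturated innovation rather than a covariance-based Kalman gain). The system matrices, the boundary-layer width `delta`, and all variable names below are illustrative assumptions, not the setup used in the paper:

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # assumed state-transition matrix
C = np.array([[1.0, 0.0]])   # assumed measurement matrix
delta = 0.5                  # assumed sliding boundary-layer width

def sif_step(x_est, z):
    """One SIF predict-correct cycle on an assumed linear system."""
    x_pred = A @ x_est           # a priori prediction from the model
    innov = z - C @ x_pred       # innovation (measurement residual)
    # SIF gain: pseudo-inverse of C scaled by the saturated innovation
    # ratio |innov|/delta, capped at 1 (the saturation term).
    K = np.linalg.pinv(C) @ np.diag(np.clip(np.abs(innov) / delta, None, 1.0))
    return x_pred + K @ innov    # a posteriori corrected state

x = np.array([0.0, 0.0])
for z in [1.0, 1.1, 1.25, 1.4]:  # synthetic measurements
    x = sif_step(x, np.array([z]))
```

Because the gain depends only on the saturated innovation and a fixed boundary layer, not on a propagated covariance, each correction is cheap, which is the source of the computation-time advantage the paper reports over EKF/UKF-style training.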