Computer science
Reservoir computing
Randomness
Hyperparameter
Artificial neural network
Nonlinear system
Simplicity (philosophy)
Recurrent neural network
Feedforward neural network
Artificial intelligence
Machine learning
Feedforward
Algorithm
Mathematics
Philosophy
Statistics
Physics
Epistemology
Quantum mechanics
Control engineering
Engineering
Authors
Haochun Ma, Davide Prosperino, Christoph Räth
Identifier
DOI: 10.1038/s41598-023-39886-w
Abstract
Reservoir computers are powerful machine learning algorithms for predicting nonlinear systems. Unlike traditional feedforward neural networks, they work on small training data sets, operate with linear optimization, and therefore require minimal computational resources. However, the traditional reservoir computer uses random matrices to define the underlying recurrent neural network and has a large number of hyperparameters that need to be optimized. Recent approaches show that the randomness can be removed by running regressions on a large library of linear and nonlinear combinations constructed from the input data, their time lags, and polynomials thereof. However, for high-dimensional and nonlinear data, the number of these combinations explodes. Here, we show that a few simple changes to the traditional reservoir computer architecture, which further minimize computational resources, lead to significant and robust improvements in short- and long-term predictive performance compared to similar models, while requiring minimal training data set sizes.
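To make the abstract's description concrete, the sketch below illustrates a traditional reservoir computer (echo state network): random input and recurrent weight matrices define the network, a handful of hyperparameters must be chosen, and training reduces to a linear ridge regression for the readout. This is not the authors' implementation; the toy sine signal, reservoir size, spectral radius, leak rate, and ridge parameter are all assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed): a scalar signal, predicted one step ahead.
t = np.linspace(0, 60, 3000)
u = np.sin(t) + 0.5 * np.sin(2.3 * t)      # input sequence u_t
y = np.roll(u, -1)                          # target: next value u_{t+1}

# Hyperparameters of a traditional reservoir computer (illustrative values).
n_reservoir = 300        # reservoir size
spectral_radius = 0.9    # scaling of the recurrent weight matrix
leak_rate = 0.5          # leaky-integration rate
ridge = 1e-6             # regularization strength of the linear readout

# Random matrices define the underlying recurrent network.
W_in = rng.uniform(-0.5, 0.5, size=n_reservoir)
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

# Drive the reservoir with the input and collect its states.
states = np.zeros((len(u), n_reservoir))
x = np.zeros(n_reservoir)
for i, ui in enumerate(u):
    x = (1 - leak_rate) * x + leak_rate * np.tanh(W_in * ui + W @ x)
    states[i] = x

# Training is purely linear optimization: ridge regression for the readout.
washout = 100                               # discard the initial transient
X, Y = states[washout:-1], y[washout:-1]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ Y)

print("reservoir train RMSE:", np.sqrt(np.mean((X @ W_out - Y) ** 2)))
```

The randomness-free alternative mentioned in the abstract replaces the random reservoir with an explicit library of time-lagged inputs and their polynomial combinations, again fitted by linear regression. A minimal sketch follows (the number of lags and the quadratic order are assumptions); for high-dimensional inputs and higher polynomial orders, this library grows very quickly.

```python
# Feature library of time lags and their quadratic combinations (sketch).
k = 3                                                         # number of lags (assumed)
lags = np.stack([u[i:len(u) - k + 1 + i] for i in range(k)], axis=1)
quad = np.stack([lags[:, i] * lags[:, j]
                 for i in range(k) for j in range(i, k)], axis=1)
library = np.hstack([lags, quad])           # size explodes for high-dimensional data

X_lib, Y_lib = library[:-1], y[k - 1:-1]    # align targets with the most recent lag
W_lib = np.linalg.solve(X_lib.T @ X_lib + ridge * np.eye(X_lib.shape[1]),
                        X_lib.T @ Y_lib)
print("library train RMSE:", np.sqrt(np.mean((X_lib @ W_lib - Y_lib) ** 2)))
```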