Nonlinear system
Recurrent neural network
Upper and lower bounds
Artificial neural network
Applied mathematics
Computer science
Matrix (mathematics)
Mathematics
Control theory
Lyapunov function
Mathematical optimization
Convergence (mathematics)
Mathematical analysis
Artificial intelligence
Control
Physics
Quantum mechanics
Composite material
Economics
Materials science
Economic growth
Authors
Lin Xiao,Bolin Liao,Shuai Li,Ke Chen
Identifier
DOI:10.1016/j.neunet.2017.11.011
Abstract
To solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. Using Lyapunov theory, both nonlinear recurrent neural networks are proved to converge in finite time. In addition, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the two proposed nonlinear recurrent neural networks have a better convergence property (i.e., a lower upper bound on the convergence time), so accurate solutions of general time-varying LMEs can be obtained in less time. Finally, a variety of situations are considered by setting different coefficient matrices of general time-varying LMEs, and a wide range of computer simulations (including an application to robot manipulators) are conducted to validate the superior finite-time convergence of the two proposed nonlinear recurrent neural networks.
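The class of recurrent neural networks described in the abstract (often called zeroing neural networks) works by defining an error matrix E(t) = A(t)X(t) − B(t) and imposing the design dynamics dE/dt = −γΦ(E), where Φ is a nonlinear activation function chosen so that E reaches zero in finite time. The following is a minimal sketch of this idea, not the paper's exact formulation: the sign-bi-power activation, the test matrices, and all function names here are illustrative assumptions.

```python
import numpy as np

def sbp(E, r=0.5):
    """Elementwise sign-bi-power activation, a common finite-time choice.
    (The paper proposes two specific activations; this one is a stand-in.)"""
    return np.sign(E) * (np.abs(E) ** r + np.abs(E) ** (1.0 / r))

def znn_solve(A, dA, B, dB, X0, gamma=10.0, dt=1e-3, T=2.0):
    """Euler-integrated recurrent network for A(t) X(t) = B(t).

    A, dA, B, dB are callables t -> matrix (coefficients and their time
    derivatives); X0 is the initial guess. With E = A X - B, imposing
    dE/dt = -gamma * Phi(E) gives A dX/dt = dB - dA X - gamma * Phi(E).
    """
    X = X0.copy()
    for k in range(int(T / dt)):
        t = k * dt
        E = A(t) @ X - B(t)
        dX = np.linalg.solve(A(t), dB(t) - dA(t) @ X - gamma * sbp(E))
        X = X + dt * dX
    return X

# Illustrative time-varying LME with a known analytical solution Xs(t),
# so B(t) is constructed to make Xs the exact solution.
A   = lambda t: np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])
dA  = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
Xs  = lambda t: np.array([[np.sin(t), np.cos(t)], [np.cos(t), -np.sin(t)]])
dXs = lambda t: np.array([[np.cos(t), -np.sin(t)], [-np.sin(t), -np.cos(t)]])
B   = lambda t: A(t) @ Xs(t)
dB  = lambda t: dA(t) @ Xs(t) + A(t) @ dXs(t)

X = znn_solve(A, dA, B, dB, X0=np.zeros((2, 2)), T=2.0)
print(np.max(np.abs(X - Xs(2.0))))  # tracking residual at t = T
```

Because Φ grows sublinearly near zero (the |E|^r term), the error decays faster than the exponential decay of a linear activation once E is small, which is the mechanism behind the finite-time bounds derived in the paper.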