Keywords: Trajectory, Control theory, Computer science, Maximization, Singularity, Reinforcement learning, Mathematical optimization, Inversion, Control, Mathematics, Artificial intelligence
Authors
Yang Hao-qiang, Xinliang Li, Deshan Meng, Xueqian Wang, Bin Liang
Source
Journal: Industrial Robot: An International Journal [Emerald (MCB UP)]
Date: 2023-07-05
Volume/Issue: 50 (5): 830-840
Identifier
DOI: 10.1108/ir-01-2023-0002
Abstract
Purpose — The purpose of this paper is to use a model-free reinforcement learning (RL) algorithm to optimize manipulability, overcoming the dilemmas of matrix inversion, complicated formula transformation and expensive calculation time.
Design/methodology/approach — Manipulability optimization is an effective way to solve the singularity problem arising in manipulator control. Some control schemes have been proposed to optimize manipulability during trajectory tracking, but they involve matrix inversion, complicated formula transformation and expensive calculation time.
Findings — The redundant manipulator trained by RL can adjust its configuration in real time to optimize manipulability in an inverse-free manner while tracking the desired trajectory. Computer simulations and physical experiments demonstrate that, compared with the existing methods, the average manipulability is increased by 58.9% and the calculation time is reduced to 17.9%. Therefore, the proposed method effectively optimizes manipulability while significantly shortening calculation time.
Originality/value — To the best of the authors' knowledge, this is the first method to optimize manipulability using RL during trajectory tracking. The authors compare their approach with existing singularity-avoidance and manipulability-maximization techniques and show that their method achieves better optimization with less computing time.
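The abstract does not spell out the manipulability measure or the RL reward used in the paper. As a minimal sketch, the code below computes the standard Yoshikawa manipulability measure w = sqrt(det(J Jᵀ)) for a planar revolute-joint arm (the quantity such methods typically optimize, which vanishes at singularities), plus an illustrative reward combining tracking error and manipulability. The arm model, the reward form and the weight `lam` are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def jacobian_planar(thetas, link_lengths):
    """Position Jacobian (2 x n) of a planar arm with revolute joints."""
    n = len(thetas)
    J = np.zeros((2, n))
    cum = np.cumsum(thetas)  # absolute link angles
    for i in range(n):
        # Joint i moves every link from i onward.
        J[0, i] = -np.sum(link_lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(link_lengths[i:] * np.cos(cum[i:]))
    return J

def manipulability(J):
    """Yoshikawa measure w = sqrt(det(J J^T)); zero at a singularity."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def reward(p_actual, p_desired, w, lam=0.1):
    """Illustrative RL reward (assumed form): penalize tracking error,
    reward manipulability. `lam` trades the two objectives off."""
    return -np.linalg.norm(p_actual - p_desired) + lam * w

# A 3-link arm: a well-conditioned pose vs. the fully stretched-out
# singular pose, where the manipulability collapses to zero.
L = np.array([1.0, 1.0, 1.0])
w_good = manipulability(jacobian_planar(np.array([0.3, 0.8, -0.5]), L))
w_sing = manipulability(jacobian_planar(np.array([0.0, 0.0, 0.0]), L))
```

Because the measure needs no matrix inversion, a reward shaped this way lets a policy avoid singular configurations without the inverse-kinematics machinery the abstract identifies as costly.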