Yang Hao-qiang, Xinliang Li, Deshan Meng, Xueqian Wang, Bin Liang
Source
Journal: Industrial Robot: An International Journal [Emerald (MCB UP)]  Date: 2023-07-05  Volume/Issue: 50 (5): 830-840
Identifier
DOI:10.1108/ir-01-2023-0002
Abstract
Purpose – The purpose of this paper is to use a model-free reinforcement learning (RL) algorithm to optimize manipulability, overcoming the dilemmas of matrix inversion, complicated formula transformation and expensive calculation time.

Design/methodology/approach – Manipulability optimization is an effective way to solve the singularity problem arising in manipulator control. Several control schemes have been proposed to optimize manipulability during trajectory tracking, but they suffer from the dilemmas of matrix inversion, complicated formula transformation and expensive calculation time.

Findings – The redundant manipulator trained by RL can adjust its configuration in real time to optimize manipulability in an inverse-free manner while tracking the desired trajectory. Computer simulations and physical experiments demonstrate that, compared with existing methods, the average manipulability is increased by 58.9% and the calculation time is reduced to 17.9% of the original. The proposed method therefore effectively optimizes manipulability while significantly shortening the calculation time.

Originality/value – To the best of the authors' knowledge, this is the first method to optimize manipulability using RL during trajectory tracking. The authors compare their approach with existing singularity-avoidance and manipulability-maximization techniques and show that it achieves better optimization effects and shorter computing time.
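For context, the manipulability the abstract refers to is conventionally Yoshikawa's measure, w(q) = sqrt(det(J(q) J(q)^T)), which drops to zero at singular configurations. The sketch below computes it for a hypothetical planar 3-link arm with unit link lengths; the Jacobian and joint angles are illustrative assumptions, not the paper's manipulator or its RL method.

```python
import numpy as np

def manipulability(J: np.ndarray) -> float:
    # Yoshikawa's measure w = sqrt(det(J J^T)); zero at singularities.
    return float(np.sqrt(np.linalg.det(J @ J.T)))

def planar_3link_jacobian(q):
    # 2x3 positional Jacobian of a planar 3-link arm with unit links
    # (illustrative example, not the paper's manipulator).
    q1, q2, q3 = q
    s1, s12, s123 = np.sin(q1), np.sin(q1 + q2), np.sin(q1 + q2 + q3)
    c1, c12, c123 = np.cos(q1), np.cos(q1 + q2), np.cos(q1 + q2 + q3)
    return np.array([
        [-s1 - s12 - s123, -s12 - s123, -s123],
        [ c1 + c12 + c123,  c12 + c123,  c123],
    ])

# A fully stretched arm (all joint angles zero) is singular: w = 0.
print(manipulability(planar_3link_jacobian([0.0, 0.0, 0.0])))
# A bent configuration is away from singularity: w > 0.
print(manipulability(planar_3link_jacobian([0.3, 0.8, -0.5])))
```

Gradient-based schemes maximize w(q) via the Jacobian pseudoinverse, which is exactly the matrix inversion the paper's inverse-free RL policy avoids.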