Computer science
Kinematics
Artificial intelligence
Task (project management)
Generalization
Leverage (statistics)
Gesture
Gesture recognition
Robot
Convolutional neural network
Motion (physics)
Computer vision
Machine learning
Pattern recognition (psychology)
Speech recognition
Engineering
Mathematics
Mathematical analysis
Physics
Systems engineering
Classical mechanics
Authors
Homa Alemzadeh, Ian Reyes, Zongyu Li
Source
Journal: Cornell University - arXiv
Date: 2023-06-28
Identifier
DOI: 10.48550/arxiv.2306.16577
Abstract
Fine-grained activity recognition enables explainable analysis of procedures for skill assessment, autonomy, and error detection in robot-assisted surgery. However, existing recognition models suffer from the limited availability of annotated datasets with both kinematic and video data and from an inability to generalize to unseen subjects and tasks. Kinematic data from the surgical robot is particularly critical for safety monitoring and autonomy, as it is unaffected by common camera issues such as occlusions and lens contamination. We leverage an aggregated dataset of six dry-lab surgical tasks from a total of 28 subjects to train activity recognition models at the gesture and motion primitive (MP) levels and for separate robotic arms using only kinematic data. The models are evaluated using the LOUO (Leave-One-User-Out) and our proposed LOTO (Leave-One-Task-Out) cross-validation methods to assess their ability to generalize to unseen users and tasks, respectively. Gesture recognition models achieve higher accuracies and edit scores than MP recognition models, but using MPs enables the training of models that generalize better to unseen tasks. Also, higher MP recognition accuracy can be achieved by training separate models for the left and right robot arms. For task generalization, MP recognition models perform best if trained on similar tasks and/or tasks from the same dataset.
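The LOUO and LOTO evaluations described in the abstract are both instances of grouped cross-validation: each fold holds out all data belonging to one user (LOUO) or one task (LOTO). The sketch below illustrates this splitting scheme with scikit-learn's LeaveOneGroupOut; the arrays X, y, subjects, and tasks are hypothetical placeholders for kinematic feature windows and their labels, not the paper's actual data pipeline.

```python
# Minimal sketch of LOUO and LOTO cross-validation splits (assumed setup,
# not the paper's code): X holds kinematic feature windows, y holds gesture
# or motion-primitive labels, and subjects/tasks hold per-window group IDs.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_windows, n_features = 200, 38                  # hypothetical sizes
X = rng.normal(size=(n_windows, n_features))     # kinematic features
y = rng.integers(0, 6, size=n_windows)           # class label per window
subjects = rng.integers(0, 28, size=n_windows)   # 28 subjects, as in the paper
tasks = rng.integers(0, 6, size=n_windows)       # 6 dry-lab tasks

logo = LeaveOneGroupOut()

# LOUO: each fold holds out every window from one subject,
# measuring generalization to an unseen user.
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    held_out_user = np.unique(subjects[test_idx])
    # ... fit a recognition model on X[train_idx], score on X[test_idx]

# LOTO: identical mechanics, but grouped by task instead of user,
# so each fold measures generalization to an entirely unseen task.
for train_idx, test_idx in logo.split(X, y, groups=tasks):
    held_out_task = np.unique(tasks[test_idx])
    # ... fit on the remaining tasks, evaluate on the held-out task
```

Grouping by task rather than user is the only change needed to turn LOUO into the proposed LOTO protocol, which is why the two evaluations can share one splitting utility.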