Human-computer interaction
Usability
Interface (matter)
Computer science
Task (project management)
Robot
Grasp
Artificial intelligence
Psychology
Applied psychology
Simulation
Physical medicine and rehabilitation
Engineering
Medicine
Maximum bubble pressure method
Bubble
Parallel computing
Programming language
Systems engineering
Authors
John R. Schultz,Andrew B. Slifkin,Hongkai Yu,Eric M. Schearer
Identifier
DOI:10.1109/icorr55369.2022.9896535
Abstract
Eating and drinking is an essential part of everyday life. And yet, there are many people in the world today who rely on others to feed them. In this work, we present a prototype robot-assisted self-feeding system for individuals with movement disorders. The system is capable of perceiving, localizing, grasping, and delivering non-compliant food items to an individual. We trained an object recognition network to detect specific food items, and we compute the grasp pose for each item. Human input is obtained through an interface consisting of an eye-tracker and a display screen. The human selects options on the monitor with their eye and head movements and triggers responses with mouth movements. We performed a pilot study with four able-bodied participants and one participant with a spinal cord injury (SCI) to evaluate the performance of our prototype system. Participants selected food items with their eye movements, which were then delivered by the robot. We observed an average overall feeding success rate of 89.1% and an average overall task time of $31.4 \pm 2.4$ seconds per food item. The SCI participant gave scores of 90.0 and 8.3 on the System Usability Scale and NASA Task Load Index, respectively. We also conducted a custom, post-study interview to gather participant feedback to drive future design decisions. The quantitative results and qualitative user feedback demonstrate the feasibility of robot-assisted self-feeding and justify continued research into mealtime-related assistive devices.
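The abstract describes a perceive-select-grasp-deliver pipeline: an object recognition network detects food items, the user selects one through the eye-tracker interface, and the robot grasps and delivers it. A minimal sketch of that control flow, assuming hypothetical function and class names (the paper's actual implementation is not given in this abstract):

```python
# Hypothetical sketch of the self-feeding loop outlined in the abstract.
# All names here (FoodItem, detect_food_items, gaze_selection, feed_once)
# are illustrative stand-ins, not the authors' actual API.

from dataclasses import dataclass


@dataclass
class FoodItem:
    name: str
    x: float  # detected position on the plate (metres, plate frame)
    y: float


def detect_food_items():
    """Stand-in for the trained object-recognition network."""
    return [FoodItem("carrot", 0.10, 0.05), FoodItem("pretzel", -0.04, 0.12)]


def gaze_selection(items, gazed_name):
    """Stand-in for the eye-tracker/display interface: the user dwells on
    an on-screen option and confirms it with a mouth movement."""
    for item in items:
        if item.name == gazed_name:
            return item
    return None


def feed_once(gazed_name):
    """One feeding cycle: perceive, let the user select, then deliver."""
    items = detect_food_items()
    choice = gaze_selection(items, gazed_name)
    if choice is None:
        return "no item selected"
    # Grasp-pose computation and arm motion would execute on the robot here.
    return f"delivered {choice.name} from ({choice.x:.2f}, {choice.y:.2f})"
```

Under this sketch, the reported per-item task time (~31 s) would cover one full `feed_once` cycle, from detection through delivery.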