Keywords
Computer science
Field (mathematical analysis)
Artificial intelligence
Representation (politics)
Clothing
Point (geometry)
Field (mathematics)
Robot
Humanoid robot
Synthetic data
Labeled data
Point cloud
Data point
Machine learning
Computer vision
Geometry
Mathematics
Archaeology
Politics
Political science
Pure mathematics
Law
History
Mathematical analysis
Authors
Jinge Qie, Yixing Gao, Runyang Feng, Xin Wang, Jielong Yang, Esha Dasgupta, Hyung Jin Chang, Yi Chang
Identifier
DOI:10.1007/978-3-031-25075-0_44
Abstract
Assistive robots can significantly reduce the burden of daily activities by providing services such as unfolding clothes and dressing assistance. For robotic clothes manipulation tasks, grasping point recognition is one of the core steps, which is usually achieved by supervised deep learning methods using large amounts of labeled training data. Given that collecting real annotated data is extremely labor-intensive and time-consuming in this field, synthetic data generated by physics engines is typically adopted for data enrichment. However, there exists an inherent discrepancy between the real and synthetic domains. Therefore, effectively leveraging synthetic data together with real data to jointly train models for grasping point recognition is desirable. In this paper, we propose a Cross-Domain Representation Learning (CDRL) framework that adaptively extracts domain-specific features from the synthetic and real domains respectively, and then fuses these domain-specific features to produce more informative and robust cross-domain representations, thereby improving the prediction accuracy of grasping points. Experimental results show that our CDRL framework recognizes grasping points more precisely than five baseline methods. Based on our CDRL framework, we enable a Baxter humanoid robot to unfold a hanging white coat with a 92% success rate and assist 6 users in dressing successfully.
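The abstract describes the core idea of CDRL (domain-specific encoders for synthetic and real data whose features are fused into a cross-domain representation for grasping point prediction) without giving implementation details. The following is a minimal PyTorch-style sketch of that general idea only; the module names, dimensions, concatenation-based fusion, per-sample pairing of synthetic and real inputs, and the coordinate-regression head are all assumptions for illustration, not the authors' actual architecture.

```python
# Minimal sketch of cross-domain feature fusion for grasping-point prediction.
# Assumptions (not from the paper): two separate CNN branches encode synthetic
# and real images, fusion is simple concatenation + MLP, and the head regresses
# 2D pixel coordinates for a fixed number of grasping points.
import torch
import torch.nn as nn


def make_encoder(feat_dim: int) -> nn.Sequential:
    """Small CNN encoder producing a fixed-length feature vector."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
    )


class CrossDomainGraspNet(nn.Module):
    def __init__(self, feat_dim: int = 256, num_points: int = 2):
        super().__init__()
        # Domain-specific encoders: one for synthetic images, one for real images.
        self.syn_encoder = make_encoder(feat_dim)
        self.real_encoder = make_encoder(feat_dim)
        # Fuse the two domain-specific features into a joint representation.
        self.fuse = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim), nn.ReLU())
        # Head predicting (x, y) coordinates for each grasping point.
        self.head = nn.Linear(feat_dim, num_points * 2)

    def forward(self, syn_img: torch.Tensor, real_img: torch.Tensor) -> torch.Tensor:
        f_syn = self.syn_encoder(syn_img)
        f_real = self.real_encoder(real_img)
        fused = self.fuse(torch.cat([f_syn, f_real], dim=1))
        # Reshape to (batch, num_points, 2) coordinate pairs.
        return self.head(fused).view(syn_img.size(0), -1, 2)


# Example usage with random tensors standing in for image batches.
model = CrossDomainGraspNet()
syn = torch.randn(4, 3, 128, 128)   # batch of synthetic images
real = torch.randn(4, 3, 128, 128)  # batch of real images
points = model(syn, real)           # shape: (4, 2, 2)
```

In this sketch the fusion is a plain concatenation followed by a linear layer; the paper's adaptive extraction and fusion of domain-specific features would replace that step, but the overall data flow (two domain branches into one shared prediction head) follows what the abstract states.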