Journal: IEEE Transactions on Systems, Man, and Cybernetics [Institute of Electrical and Electronics Engineers] Date: 2016-04-07 Volume/Issue: 46 (7): 969-979 Citations: 65
Identifier
DOI:10.1109/tsmc.2016.2524059
Abstract
The perception of the tactile and vision modalities plays a crucial role in object classification and precise robot operations. On one hand, visual perception helps acquire the objects' apparent characteristics (e.g., shape and color), which are paramount for grasp planning. On the other hand, when grasping an object, tactile sensors can detect its softness or stiffness and its surface texture, and thus can be used to classify objects. In this paper, two different tactile models are first developed to model the tactile sequences, namely bag-of-systems (BoS) and deep dynamical system (DDS). To be specific, BoS applies the extreme learning machine method as the classifier, while DDS employs deep neural networks as the feature mapping functions in order to deal with the highly nonlinear data. As for visual sensing, we formulate a signature of histograms of orientations descriptor to model the shape of objects. Moreover, by using human experience, we identify the graspable components of objects at the category level instead of the object level. Finally, a fast planning method considering both contact point extraction and hand kinematics is proposed to accomplish the grasping manipulation. Comparative experiments demonstrate the effectiveness of our proposed methods on tasks including robot grasping and object classification.
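The abstract mentions that the BoS model uses an extreme learning machine (ELM) as its classifier. As a rough illustration (not the paper's actual implementation, whose architecture and hyperparameters are not given in the abstract), a minimal ELM can be sketched as a single hidden layer with fixed random weights and a closed-form least-squares solution for the output weights:

```python
import numpy as np

def elm_train(X, y, n_hidden=32, seed=0):
    """Fit a minimal extreme learning machine (illustrative sketch).

    The hidden-layer weights W and biases b are drawn randomly and
    kept fixed; only the output weights beta are learned, via the
    Moore-Penrose pseudoinverse (a closed-form least-squares fit).
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)            # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y      # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    # Apply the same fixed random feature map, then the learned weights.
    return np.tanh(X @ W + b) @ beta
```

Because only the output layer is solved for, training reduces to one pseudoinverse computation, which is what makes ELM attractive as a fast classifier on top of precomputed tactile features.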