Localizing a polyhedral object in a robot hand by integrating visual and tactile data
Abstract
We present a novel technique for localizing a polyhedral object in a robot hand by integrating visual and tactile data. Localization is performed by matching a hybrid set of visual and tactile features with corresponding model features. The matching process first determines a subset of the object's six degrees of freedom (DOFs) using the tactile feature. The remaining DOFs, which cannot be determined from the tactile feature, are then obtained by matching the visual feature. Two filtering techniques, one touch-based and one vision/touch-based, are developed to reduce the number of model feature sets that are actually matched with a given scene set. We demonstrate the performance of the technique using simulated and real data. In particular, we show its superiority over vision-based localization in the following aspects: (1) capability of determining the object pose under heavy occlusion, (2) number of generated pose hypotheses, and (3) accuracy of estimating the object depth.
Keywords: 3D object recognition, Pose estimation, Visual data, Tactile data, Sensor integration, Robot hand, Object manipulation
Article history: Received 28 September 1998, Revised 2 March 1999, Accepted 2 March 1999, Available online 7 June 2001.
DOI: https://doi.org/10.1016/S0031-3203(99)00059-X