Generative model based robotic grasp pose prediction with limited dataset

Authors: Priya Shukla, Nilotpal Pramanik, Deepesh Mehta, G. C. Nandi

Abstract

In the present investigation, we propose an architecture, which we name the Generative Inception Neural Network (GI-NNet), capable of intelligently predicting antipodal robotic grasps for both seen and unseen objects. Trained on the Cornell Grasping Dataset (CGD), it attains 98.87% grasp pose accuracy in detecting both regular and irregular shaped objects from RGB-Depth images, while requiring only one third of the trainable network parameters of existing approaches. However, to attain this level of performance the model requires 90% of the available labelled data of CGD for training, keeping only 10% for testing, which makes it vulnerable to poor generalization. Furthermore, obtaining a sufficient quantity of high-quality labelled data for robot grasping is extremely difficult. To address these issues, we subsequently propose another architecture in which the GI-NNet model is attached as the decoder of a Vector Quantized Variational Auto-Encoder (VQ-VAE), which works more efficiently when trained with both labelled and unlabelled data. This model, which we name Representation based GI-NNet (RGI-NNet), has been trained on various splits of the CGD to test the learning ability of the architecture, ranging from 10% labelled data with the latent embedding of the VQ-VAE up to 90% labelled data with the latent embedding. Remarkably, the architecture produces its best results when trained with only 50% of the labelled data of CGD together with the latent embedding. The reasoning behind this, together with other relevant technical details, is elaborated in this paper. The grasp pose accuracy of RGI-NNet varies between 92.1348% and 97.7528%, which is far better than that of several existing models trained with only labelled data.
For performance verification of both proposed models, GI-NNet and RGI-NNet, we have performed rigorous experiments on the Anukul (Baxter) hardware cobot.
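The component that lets RGI-NNet exploit unlabelled data is the VQ-VAE, whose defining step is vector quantization: each continuous encoder output is snapped to its nearest entry in a learned codebook, and that discrete latent embedding is what the decoder (here, GI-NNet) consumes. The paper does not give implementation details, so the following is a minimal NumPy sketch of the generic codebook lookup only; the function name, shapes, and codebook size are illustrative assumptions, not the authors' code.

```python
import numpy as np

def vector_quantize(z_e, codebook):
    """Snap each encoder output vector to its nearest codebook embedding.

    z_e:      (N, D) continuous encoder outputs (illustrative shapes)
    codebook: (K, D) learned discrete embeddings
    Returns the quantized vectors (N, D) and their codebook indices (N,).
    """
    # Squared Euclidean distance from every z_e row to every codebook row
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)   # index of the nearest codebook entry
    z_q = codebook[idx]          # discrete latent fed to the decoder
    return z_q, idx

# Toy usage: two encoder vectors, a two-entry codebook
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z_e = np.array([[0.1, 0.1], [0.9, 0.8]])
z_q, idx = vector_quantize(z_e, codebook)
```

Because the codebook can be trained on images alone (no grasp labels needed), the encoder and codebook learn from the full dataset, while only the grasp-predicting decoder depends on the labelled split.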

Keywords: Intelligent robot grasping, Generative inception neural network, Vector quantized variational auto-encoder, Representation based generative inception neural network

Paper URL: https://doi.org/10.1007/s10489-021-03011-z