Triplanar convolution with shared 2D kernels for 3D classification and shape retrieval
Authors:
Highlights:
Abstract
Increasing the depth of Convolutional Neural Networks (CNNs) is known to improve generalization performance. In the case of 3D CNNs, however, stacking layers increases the number of learnable parameters linearly, making the network more prone to learning redundant features. In this paper, we propose a novel 3D CNN structure, termed S3PNet, that learns shared 2D triplanar features viewed from the three orthogonal planes. Owing to the reduced dimensionality of the convolutions, the proposed S3PNet is able to learn 3D representations with substantially fewer learnable parameters. Experimental evaluations show that the combination of 2D representations over the different orthogonal views learned by S3PNet is sufficient and effective for 3D representation, outperforming current methods based on fully 3D CNNs. We support this with extensive evaluations on widely used 3D data sources in computer vision: CAD models, LiDAR point clouds, RGB-D images, and 3D Computed Tomography scans. Experiments further demonstrate that S3PNet generalizes better on smaller training sets and learns kernels with less redundancy than those learned by fully 3D CNNs.
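To make the core idea concrete, the following is a minimal PyTorch sketch of a triplanar convolution that reuses one shared 2D kernel on all three orthogonal plane orientations of a voxel grid. This is not the authors' implementation: the module name TriplanarConv and the fusion of the three plane responses by summation are illustrative assumptions.

```python
# Minimal sketch of triplanar convolution with a shared 2D kernel (assumes PyTorch).
import torch
import torch.nn as nn

class TriplanarConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # One 2D kernel reused on all three orthogonal plane orientations,
        # so the parameter count is that of a single 2D convolution.
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def _conv_planes(self, x):
        # x: (B, C, P, H, W) -- fold the plane-index axis P into the batch,
        # apply the shared 2D convolution to every slice, then unfold.
        b, c, p, h, w = x.shape
        y = self.conv(x.transpose(1, 2).reshape(b * p, c, h, w))
        return y.reshape(b, p, -1, h, w).transpose(1, 2)

    def forward(self, x):
        # x: (B, C, D, H, W) voxel grid
        axial    = self._conv_planes(x)                          # slices along D
        coronal  = self._conv_planes(x.permute(0, 1, 3, 2, 4))   # slices along H
        sagittal = self._conv_planes(x.permute(0, 1, 4, 2, 3))   # slices along W
        # Restore (B, C', D, H, W) orientation, then fuse by summation
        # (an illustrative choice of fusion).
        coronal  = coronal.permute(0, 1, 3, 2, 4)
        sagittal = sagittal.permute(0, 1, 3, 4, 2)
        return axial + coronal + sagittal

x = torch.randn(2, 1, 32, 32, 32)      # e.g. a voxelized CAD model
print(TriplanarConv(1, 16)(x).shape)   # torch.Size([2, 16, 32, 32, 32])
```

For comparison, a 3D convolution with the same kernel size would use k^3 weights per input-output channel pair, while the shared triplanar kernel uses only k^2, which is the source of the parameter reduction the abstract describes.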
Keywords:
Article history: Received 9 January 2019, Revised 2 September 2019, Accepted 30 December 2019, Available online 15 January 2020, Version of Record 24 January 2020.
DOI: https://doi.org/10.1016/j.cviu.2019.102901