Mitigating severe over-parameterization in deep convolutional neural networks through forced feature abstraction and compression with an entropy-based heuristic

Authors:

Highlights:

• We propose an accurate heuristic that uses Shannon's entropy measure to set an upper limit on convolutional depth by forcing feature-map compression and abstraction, reducing training time by 24.99%–78.59% across various CNN architectures and datasets (a minimal sketch of the idea appears after these highlights).

• Our experiments empirically validate and support the findings presented in [9] and [20] that deep CNN models behave as a collection of ensemble networks, and that shallower CNN models can learn the same functional representations as deeper models at a reduced relative training time.

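The listing does not reproduce the paper's exact EBCLE procedure, so the following is only a minimal sketch of the underlying idea, assuming a layer's Shannon entropy is computed over a histogram of its feature-map activation values and that depth is capped once entropy drops below a floor. The function names and the `min_entropy` threshold are hypothetical illustrations, not the authors' method.

```python
import numpy as np

def shannon_entropy(feature_map: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy H = -sum(p * log2(p)) over the activation-value histogram."""
    hist, _ = np.histogram(feature_map, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]               # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

def estimate_depth_limit(feature_maps, min_entropy: float = 0.5) -> int:
    """Return an upper bound on useful convolutional depth.

    `feature_maps` is one activation tensor per layer from a single forward
    pass; depth is capped at the first layer whose feature-map entropy falls
    below the (hypothetical) floor `min_entropy`.
    """
    for depth, fmap in enumerate(feature_maps, start=1):
        if shannon_entropy(fmap) < min_entropy:
            return depth
    return len(feature_maps)
```
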
Keywords: Convolutional neural networks (CNNs), Depth redundancy, Entropy, Feature compression, EBCLE

Article history: Received 17 December 2020; Revised 18 March 2021; Accepted 16 May 2021; Available online 27 May 2021; Version of Record 4 June 2021.

DOI: https://doi.org/10.1016/j.patcog.2021.108057