Discretizing continuous neural networks using a polarization learning rule

Authors:

Highlights:

Abstract:

Discrete neural networks are simpler than their continuous counterparts, yield more stable solutions, and produce hidden-layer representations that are easier to interpret. This paper presents a polarization learning rule for discretizing multi-layer neural networks with continuous activation functions. The rule forces the activation value of each neuron towards the two poles of its activation function. First, we apply this rule, in the form of a modified error function, to discretize the hidden units of a back-propagation network. We then apply the same principle to second-order recurrent networks to solve grammatical inference problems. The experimental results are superior to those obtained with existing approaches.
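The abstract does not give the exact form of the modified error function. A common way to realize such a polarization term for sigmoid units, whose poles are 0 and 1, is to add a penalty that vanishes at the poles and peaks at mid-range activations. The sketch below is a minimal illustration under that assumption, not the paper's formula; the penalty weight `lam` and the function names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def polarized_error(y_pred, y_true, h, lam=0.1):
    """Squared error plus an assumed polarization penalty on hidden units.

    h holds hidden-unit activations in (0, 1). The term h * (1 - h) is
    zero at the two poles of the sigmoid (0 and 1) and maximal at 0.5,
    so minimizing it drives activations toward discrete values.
    lam is a hypothetical penalty weight, not a value from the paper.
    """
    mse = 0.5 * np.sum((y_pred - y_true) ** 2)
    polarization = lam * np.sum(h * (1.0 - h))
    return mse + polarization
```

Under this assumed form, back-propagation acquires an extra gradient term proportional to lam * (1 - 2h) for each hidden activation, which pushes mid-range activations toward one of the two poles.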

Keywords: Neural networks, Error back-propagation, Grammatical inference, Finite state automata, Discretization, Second-order recurrent networks

Article history: Received 7 November 1995; Revised 10 May 1996; Available online 7 June 2001.

DOI: https://doi.org/10.1016/S0031-3203(96)00082-9