Feed-forward versus recurrent architecture and local versus cellular automata distributed representation in reservoir computing for sequence memory learning
Authors: Mrwan Margem, Osman S. Gedik
Abstract
Reservoir computing based on cellular automata (ReCA) builds a novel bridge between automata computation theory and recurrent neural networks. ReCA has been trained to solve the 5-bit memory task. Several methods are proposed to implement the reservoir; among them, the distributed representation of cellular automata (CA) in a recurrent architecture solves the 5-bit tasks with minimal complexity and a minimal number of training examples. The CA distributed representation in a recurrent architecture outperforms the local representation in a recurrent architecture (stack reservoir), as well as echo state networks and feed-forward architectures using either local or distributed representations. Features extracted from the reservoir using the natural diffusion of CA states offer state-of-the-art results in terms of feature-vector length and the number of required training examples. A further extension combines the reservoir CA states with an XOR, Binary, or Gray operator to produce a single feature vector and thus reduce the feature space. This method gives promising results; however, using the natural diffusion of CA states still performs best. ReCA can be considered to operate near the lower bound of complexity, since it uses only elementary CA in the reservoir.
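To make the abstract's two feature-extraction schemes concrete, here is a minimal sketch of an elementary-CA reservoir. It is an illustration of the general ReCA idea, not the authors' exact pipeline: the rule number, number of iterations, and function names are assumptions. The "natural diffusion" vector is the concatenation of all CA states over time; the XOR variant combines those states into a single, shorter vector.

```python
import numpy as np

def eca_step(state, rule=90):
    """One synchronous update of an elementary CA with periodic boundaries.
    Rule 90 is an illustrative choice, not necessarily the paper's rule."""
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    idx = (left << 2) | (state << 1) | right   # 3-cell neighbourhood code, 0..7
    table = (rule >> np.arange(8)) & 1         # the rule's 8-entry lookup table
    return table[idx]

def reservoir_features(init, iterations=4, rule=90, combine=None):
    """Evolve the CA from `init` and return either the concatenated state
    history (the 'natural diffusion' feature vector) or, with combine='xor',
    a single XOR-combined vector that reduces the feature space."""
    states = [np.asarray(init)]
    for _ in range(iterations):
        states.append(eca_step(states[-1], rule))
    history = np.stack(states)
    if combine == "xor":
        return np.bitwise_xor.reduce(history, axis=0)  # one combined vector
    return history.ravel()                             # full distributed vector
```

Note the trade-off the abstract describes: with `iterations=I` over `N` cells, the diffusion vector has length `(I+1)*N`, while the XOR-combined vector has length `N`, at the cost of discarding the temporal structure that makes the diffusion features strongest.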
Keywords: ReCA, Reservoir computing, Cellular automata, Recurrent architecture, Feed-forward architecture, Distributed representation, Local representation, 5-Bit memory task
Paper URL: https://doi.org/10.1007/s10462-020-09815-8