A Review of Biologically Motivated Space-Variant Data Reduction Models for Robotic Vision
Authors:
Highlights:
Abstract
The primate retina performs nonlinear “image” data reduction, providing a compromise between high resolution where needed, a wide field of view, and a small output image size. For autonomous robotics, this compromise is useful for developing vision systems with adequate response times. This paper reviews the two classes of retino–cortical data reduction models used in hardware implementations. The first class reproduces the retina-to-cortex mapping with conformal mapping functions: pixel intensities are averaged uniformly within nonoverlapping groups called receptive fields (RFs), and, as in the retina, RF size increases with distance from the center of the sensor. Implementations of this class are reported to run at video rates (30 frames per second). The second class reproduces, in addition to the variable-resolution retino–cortical mapping, the overlap of the receptive fields of retinal ganglion cells. Data reduction with this class is more computationally expensive because of the RF overlap; however, an implementation running at a minimum of 10 frames per second has recently been proposed. Beyond biological consistency, models with overlapping fields permit the simple selection of a variety of RF computational masks.
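To make the first class of models concrete, the following is a minimal sketch (not taken from any of the reviewed implementations) of space-variant data reduction with nonoverlapping, uniformly averaged receptive fields arranged in a log-polar pattern. It uses NumPy; the function name `logpolar_reduce`, the ring/wedge counts, and the minimum radius `r_min` are illustrative assumptions.

```python
import numpy as np

def logpolar_reduce(image, num_rings=32, num_wedges=64, r_min=2.0):
    """Average pixel intensities over nonoverlapping annular-sector
    receptive fields (RFs) whose size grows with distance from the
    image center, producing a small (num_rings x num_wedges)
    "cortical" image. Illustrative sketch of the first model class."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)

    # Polar coordinates of every input pixel.
    ys, xs = np.mgrid[0:h, 0:w]
    radius = np.hypot(ys - cy, xs - cx)
    angle = np.arctan2(ys - cy, xs - cx)          # in [-pi, pi]

    # Ring index with logarithmic spacing, so RF size grows with eccentricity.
    ring = np.floor(num_rings * np.log(np.maximum(radius, r_min) / r_min)
                    / np.log(r_max / r_min)).astype(int)
    ring = np.clip(ring, 0, num_rings - 1)

    # Wedge (angular sector) index.
    wedge = np.floor((angle + np.pi) / (2 * np.pi) * num_wedges).astype(int)
    wedge = np.clip(wedge, 0, num_wedges - 1)

    # Keep only pixels inside the annular field of view.
    valid = (radius >= r_min) & (radius < r_max)

    # Uniform (unweighted) averaging within each nonoverlapping RF.
    flat_idx = ring[valid] * num_wedges + wedge[valid]
    sums = np.bincount(flat_idx, weights=image[valid],
                       minlength=num_rings * num_wedges)
    counts = np.bincount(flat_idx, minlength=num_rings * num_wedges)
    cortical = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    return cortical.reshape(num_rings, num_wedges)

if __name__ == "__main__":
    # Toy example: a 512x512 gradient image reduced to a 32x64 cortical image.
    img = np.linspace(0, 255, 512 * 512).reshape(512, 512)
    out = logpolar_reduce(img)
    print(img.shape, "->", out.shape)   # (512, 512) -> (32, 64)
```

The hard ring/wedge assignment and unweighted mean correspond to the nonoverlapping, uniform RFs of the first model class; the second class described in the abstract would instead apply a weighted mask per (overlapping) RF, which is why it is more computationally expensive but allows a variety of RF computational masks.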
Keywords:
Article history: Received 30 May 1995, Accepted 1 October 1996, Available online 10 April 2002.
Paper link: https://doi.org/10.1006/cviu.1997.0560