A camera-direction dependent visual-motor coordinate transformation for a visually guided neural robot

Authors:

Highlights:

Abstract

Objects of interest are represented in the brain simultaneously in different frames of reference. Knowing the positions of one's head and eyes, for example, one can compute the body-centred position of an object from its perceived coordinates on the retinae. We propose a simple and fully trained attractor network which computes head-centred coordinates given eye position and a perceived retinal object position. We demonstrate this system on artificial data and then apply it within a fully neurally implemented control system which visually guides a simulated robot to a table for grasping an object. The integrated system takes as input a primitive visual system with a what–where pathway which localises the target object in the visual field. The coordinate transform network combines the visually perceived object position with the camera pan-tilt angle and computes the target position in a body-centred frame of reference. This position is used by a reinforcement-trained network to dock a simulated PeopleBot robot at a table for reaching the object. Hence, neurally computing coordinate transformations with an attractor network has both biological relevance and technical use for this important class of computations.
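The transformation the attractor network learns can be illustrated with a simplified geometric sketch: to first order, the object's direction in a body-centred frame is the camera's pan-tilt angle plus the object's angular offset in the visual field. The function below is a hypothetical illustration of this relationship, not the paper's neural implementation (which learns the mapping in a trained attractor network rather than computing it arithmetically).

```python
def body_centred_direction(pan_deg, tilt_deg, retinal_x_deg, retinal_y_deg):
    """Simplified geometric version of the visuo-motor coordinate transform:
    the body-centred direction of a target is approximated by adding the
    camera's pan/tilt angles to the target's angular offset on the image
    plane. All angles are in degrees. (Illustrative only; the paper's
    network learns this mapping rather than computing it directly.)"""
    azimuth = pan_deg + retinal_x_deg      # horizontal direction in body frame
    elevation = tilt_deg + retinal_y_deg   # vertical direction in body frame
    return azimuth, elevation
```

A target seen 3 degrees right and 2 degrees above the image centre, with the camera panned 10 degrees right and tilted 5 degrees down, thus lies at roughly 13 degrees azimuth and -3 degrees elevation in the body-centred frame.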

Keywords: Frame of reference transformations, Neural networks, Boltzmann machine, Reinforcement learning, Robotics

Article history: Received 28 October 2005, Accepted 28 November 2005, Available online 17 February 2006.

DOI: https://doi.org/10.1016/j.knosys.2005.11.020