The well-designed logical robot: Learning and experience from observations to the Situation Calculus

Authors:

Highlights:

Abstract

The well-designed logical robot paradigmatically represents, in McCarthy's words, the abilities a robot-child should have in order to reveal the structure of reality within a “language of thought”. In this paper we partially support McCarthy's hypothesis by showing that early perception can trigger an inference process leading to the “language of thought”. We show this by defining a systematic transformation between structures of different formal languages that share the same signature kernel for actions and states. Starting from early vision, visual features are encoded by descriptors mapping the space of features into the space of actions. The densities estimated in this space form the observation layer of a hidden-state model, labelling the identified actions as observations and the states as action preconditions and effects. The learned parameters are then used to specify the probability space of a first-order probability model. Finally, we show how to transform the probability model into a model of the Situation Calculus in which the learning phase has been reified into axioms for the preconditions and effects of actions; these axioms are, of course, expressed in the language of thought. This shows, albeit partially, that there is an underlying structure of perception that can be brought into a logical language.
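The middle step of the pipeline described above — a hidden-state model whose observations are recognised actions and whose hidden states play the role of action preconditions and effects — can be sketched in miniature. The following is an illustrative toy, not the paper's actual model: the state names (`pre(grasp)`, `eff(grasp)`), the action vocabulary, and all probabilities are hypothetical, and Viterbi decoding stands in for whatever inference the authors use.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for a sequence of observed actions."""
    # V[t][s]: probability of the best path ending in state s at time t.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(V[-1], key=V[-1].get)
    return path[best]

# Hypothetical two-state model: "pre(grasp)" is the precondition state
# that tends to emit the action "reach"; "eff(grasp)" is the effect
# state that tends to emit "grasp". All numbers are made up.
states = ["pre(grasp)", "eff(grasp)"]
start_p = {"pre(grasp)": 0.9, "eff(grasp)": 0.1}
trans_p = {"pre(grasp)": {"pre(grasp)": 0.4, "eff(grasp)": 0.6},
           "eff(grasp)": {"pre(grasp)": 0.3, "eff(grasp)": 0.7}}
emit_p = {"pre(grasp)": {"reach": 0.8, "grasp": 0.2},
          "eff(grasp)": {"reach": 0.1, "grasp": 0.9}}

decoded = viterbi(["reach", "reach", "grasp"], states, start_p, trans_p, emit_p)
print(decoded)  # → ['pre(grasp)', 'pre(grasp)', 'eff(grasp)']
```

The decoded state sequence is the kind of structure the paper then reifies: the learned transition and emission parameters become the probability space of a first-order model, and ultimately precondition and effect axioms in the Situation Calculus.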

Keywords: Visual perception, Action space, Action recognition, Parametric probability model, Learning knowledge, Inference from visual perception to knowledge representation, Theory of action, Learning a theory of action from visual perception

Review history: Received 21 January 2007, Revised 11 March 2010, Accepted 14 March 2010, Available online 3 April 2010.

Paper URL: https://doi.org/10.1016/j.artint.2010.04.016