Visual object-action recognition: Inferring object affordances from human demonstration
Authors:
Highlights:
Abstract
This paper investigates object categorization according to function, i.e., learning the affordances of objects from human demonstration. Object affordances (functionality) are inferred from observations of humans using the objects in different types of actions. The intended application is learning from demonstration, in which a robot learns to employ objects in household tasks by observing a human performing the same tasks with the objects. We present a method for categorizing manipulated objects and human manipulation actions in the context of each other. The method is able to simultaneously segment and classify human hand actions, and detect and classify the objects involved in the action. This can serve as an initial step in a learning from demonstration method. Experiments show that the contextual information improves the classification of both objects and actions.
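To illustrate the idea of classifying objects and actions in the context of each other, the sketch below shows a minimal joint-inference toy example: per-class scores from an action classifier and an object appearance classifier are coupled through an affordance compatibility table, and the jointly most probable (action, object) pair is selected. The label sets, table values, and scores are hypothetical, and this is only an illustration of the contextual-coupling idea, not the paper's actual model.

```python
import numpy as np

# Hypothetical label sets (for illustration only; not the paper's categories).
ACTIONS = ["drink", "pour", "hammer"]
OBJECTS = ["cup", "pitcher", "mallet"]

# Affordance compatibility P(object | action): how plausible each object is
# as the target of each action. Values are made up for this sketch.
AFFORDANCE = np.array([
    [0.80, 0.15, 0.05],   # drink  -> cup, pitcher, mallet
    [0.30, 0.65, 0.05],   # pour
    [0.05, 0.05, 0.90],   # hammer
])

def joint_classify(action_scores, object_scores):
    """Return the jointly most probable (action, object) pair.

    action_scores : per-class likelihoods from an action classifier
    object_scores : per-class likelihoods from an object appearance classifier

    The joint score couples the two streams through the affordance table,
    so an ambiguous object can be disambiguated by the observed action,
    and vice versa.
    """
    a = np.asarray(action_scores, dtype=float)
    o = np.asarray(object_scores, dtype=float)
    joint = a[:, None] * AFFORDANCE * o[None, :]
    i, j = np.unravel_index(np.argmax(joint), joint.shape)
    return ACTIONS[i], OBJECTS[j]

if __name__ == "__main__":
    # Appearance alone slightly favours "pitcher", but the hand motion looks
    # like drinking; the joint inference settles on ("drink", "cup").
    print(joint_classify(action_scores=[0.7, 0.2, 0.1],
                         object_scores=[0.45, 0.50, 0.05]))
```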
Keywords:
Review history: Received 12 March 2010, Accepted 14 August 2010, Available online 20 August 2010.
Paper link: https://doi.org/10.1016/j.cviu.2010.08.002