Mining deep And-Or object structures via cost-sensitive question-answer-based active annotations
Abstract:
This paper presents a cost-sensitive active Question-Answering (QA) framework for learning a nine-layer And-Or graph (AOG) from web images. The AOG explicitly represents object categories, poses/viewpoints, parts, and detailed structures within the parts in a compositional hierarchy. The QA framework is designed to minimize an overall risk, which trades off the loss and query costs. The loss is defined for nodes in all layers of the AOG, including the generative loss (measuring the likelihood of the images) and the discriminative loss (measuring the fitness to human answers). The cost comprises both the human labor of answering questions and the computational cost of model learning. The cost-sensitive QA framework iteratively selects different storylines of questions to update different nodes in the AOG. Experiments showed that our method required much less human supervision (e.g. labeling parts on 3–10 training objects for each category) and achieved better performance than baseline methods.
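The core selection rule described above, trading off expected loss reduction against human-labor and computation costs, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; all class names, fields, and numeric values are hypothetical assumptions.

```python
# Hypothetical sketch of cost-sensitive question selection: at each QA
# iteration, pick the question whose expected loss reduction (generative +
# discriminative) best outweighs its combined labor and computation cost.
from dataclasses import dataclass


@dataclass
class Question:
    name: str
    expected_loss_reduction: float  # expected drop in total loss if answered
    labor_cost: float               # human effort to answer the question
    compute_cost: float             # cost of the model update it triggers


def risk_delta(q: Question) -> float:
    # Net change in overall risk from asking q: query costs minus the
    # expected loss reduction (negative values mean the question helps).
    return (q.labor_cost + q.compute_cost) - q.expected_loss_reduction


def select_question(questions):
    # Greedily choose the question that lowers overall risk the most;
    # stop (return None) once no remaining question reduces the risk.
    best = min(questions, key=risk_delta)
    return best if risk_delta(best) < 0 else None
```

For example, a part-labeling question with a large expected loss reduction would be preferred over a cheap but uninformative pose-confirmation question, and the loop terminates when every candidate question costs more than it is expected to gain.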
Article history: Received 17 August 2017, Revised 15 September 2018, Accepted 17 September 2018, Available online 11 October 2018, Version of Record 6 December 2018.
DOI: https://doi.org/10.1016/j.cviu.2018.09.008