Dynamically structuring, updating and interrelating representations of visual and linguistic discourse context

Authors:

Abstract

The fundamental claim of this paper is that salience, both visual and linguistic, is an important overarching semantic category structuring visually situated discourse. Based on this, we argue that computer systems attempting to model the evolving context of a visually situated discourse should integrate models of visual and linguistic salience within their natural language processing (NLP) framework. The paper highlights the importance of dynamically updating and interrelating visual and linguistic discourse context representations. To support our approach, we have developed a real-time, natural language virtual reality (NLVR) system (called LIVE, for Linguistic Interaction with Virtual Environments) that implements an NLP framework based on both visual and linguistic salience. Within this framework, salience information underpins two of the core subtasks of NLP: reference resolution and the generation of referring expressions. We describe the theoretical basis and architecture of the LIVE NLP framework and present extensive evaluation results comparing the system's performance with that of human participants in a number of experiments.
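The abstract does not specify how the two salience scores are combined; the actual LIVE model is defined in the paper itself. Purely as an illustrative sketch of the general idea (all class names, update methods, scores and weights below are hypothetical, not the paper's algorithm), a discourse context model might keep per-entity visual and linguistic salience values, refresh them as the scene and the dialogue evolve, and resolve a referring expression by ranking candidate referents over a weighted combination of the two:

```python
from dataclasses import dataclass, field
from typing import Dict, Iterable, Optional

@dataclass
class Entity:
    name: str
    visual_salience: float = 0.0      # e.g. size/centrality of the object in the current view
    linguistic_salience: float = 0.0  # e.g. recency/prominence of mention in the discourse

@dataclass
class ContextModel:
    entities: Dict[str, Entity] = field(default_factory=dict)

    def _get(self, name: str) -> Entity:
        return self.entities.setdefault(name, Entity(name))

    def update_visual(self, name: str, score: float) -> None:
        # Refresh an entity's visual salience after each rendered frame.
        self._get(name).visual_salience = score

    def update_linguistic(self, name: str, score: float) -> None:
        # Refresh an entity's linguistic salience after each utterance.
        self._get(name).linguistic_salience = score

    def resolve_reference(self, candidates: Iterable[str],
                          w_vis: float = 0.5, w_ling: float = 0.5) -> Optional[str]:
        # Rank candidate referents by a weighted sum of visual and linguistic
        # salience and return the highest-scoring one (None if no candidates).
        scored = [
            (w_vis * self._get(c).visual_salience +
             w_ling * self._get(c).linguistic_salience, c)
            for c in candidates
        ]
        return max(scored)[1] if scored else None
```

A short usage example under the same assumptions: a visually prominent object competes with a recently mentioned one, and the weighted combination decides the referent.

```python
ctx = ContextModel()
ctx.update_visual("red_ball", 0.8)       # large and central in the current view
ctx.update_linguistic("red_ball", 0.2)
ctx.update_visual("blue_ball", 0.3)
ctx.update_linguistic("blue_ball", 0.9)  # mentioned in the previous utterance
print(ctx.resolve_reference(["red_ball", "blue_ball"]))  # -> "blue_ball" (0.6 vs 0.5)
```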

Keywords: Visual salience, Reference resolution, Generating referring expressions, Discourse context, Cross-modal representations, Synthetic vision

Article history: Received 22 July 2004; Revised 21 February 2005; Accepted 14 April 2005; Available online 19 August 2005.

DOI: https://doi.org/10.1016/j.artint.2005.04.008