Multi-sensory and Multi-modal Fusion for Sentient Computing
Author: Christopher Town
Abstract
This paper presents an approach to multi-sensory and multi-modal fusion in which computer vision information obtained from calibrated cameras is integrated with a large-scale sentient computing system known as “SPIRIT”. The SPIRIT system employs an ultrasonic location infrastructure to track people and devices in an office building and model their state. Vision techniques include background and object appearance modelling, face detection, segmentation, and tracking modules. Integration is achieved at the system level through the metaphor of shared perceptions, in the sense that the different modalities are guided by and provide updates to a shared world model. This model incorporates aspects of both the static (e.g. positions of office walls and doors) and the dynamic (e.g. location and appearance of devices and people) environment.
Keywords: multi-sensory fusion, multi-modal fusion, sentient computing, object tracking, Bayesian networks
Paper URL: https://doi.org/10.1007/s11263-006-7834-8