Simulating belief systems of autonomous agents
Authors:
Abstract
Autonomous agents in computer simulations lack the usual mechanisms that their human counterparts use to acquire information. In many such simulations, it is not desirable that the agent have access to complete and correct information about its environment. We examine how imperfection in available information may be simulated in the case of autonomous agents. We determine probabilistically what the agent may detect, through hypothetical sensors, in a given situation. These detections are combined with the agent's knowledge base to infer observations and beliefs. Inherent in this task is a degree of uncertainty in choosing the most appropriate observation or belief. We describe and compare two approaches, one numerical and one based on defeasible logic, for simulating an appropriate belief in light of conflicting detection values at a given point in time. We discuss the application of this technique to autonomous forces in combat simulation systems.
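The abstract contrasts a numerical approach and a defeasible-logic approach for settling conflicting detections into a single belief. Below is a minimal, hypothetical sketch of that contrast, not the paper's own model: the class `Detection`, the functions `numerical_belief` and `defeasible_belief`, and all example hypotheses, confidences, and priorities are illustrative assumptions.

```python
# Hypothetical sketch: two conflicting sensor detections about the same
# contact are resolved into one belief, either numerically (highest
# confidence wins) or defeasibly (higher-priority rules defeat conflicting
# lower-priority ones).  All names and values are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Detection:
    hypothesis: str     # e.g. "contact-1 is a tank"
    confidence: float   # sensor's probability that the hypothesis holds


def numerical_belief(detections):
    """Numerical approach: adopt the hypothesis with the highest confidence."""
    return max(detections, key=lambda d: d.confidence)


def defeasible_belief(detections, priority):
    """Defeasible approach: treat each detection as a defeasible rule; a rule
    survives only if no conflicting detection has strictly higher priority."""
    undefeated = [
        d for d in detections
        if not any(priority.get(o.hypothesis, 0) > priority.get(d.hypothesis, 0)
                   for o in detections if o is not d)
    ]
    # Among the surviving rules, fall back to the numerical ranking.
    return numerical_belief(undefeated or detections)


if __name__ == "__main__":
    # Two hypothetical sensors disagree about the same contact.
    conflicting = [
        Detection("contact-1 is a tank", 0.55),
        Detection("contact-1 is a truck", 0.70),
    ]
    # Assumed background knowledge gives the "tank" rule precedence even
    # though its raw sensor confidence is lower.
    priority = {"contact-1 is a tank": 2, "contact-1 is a truck": 1}

    print("numerical belief: ", numerical_belief(conflicting).hypothesis)   # truck
    print("defeasible belief:", defeasible_belief(conflicting, priority).hypothesis)  # tank
```

In this toy setting the two strategies diverge: the numerical rule follows the higher raw confidence, while the defeasible rule lets prior knowledge override it, which is the kind of conflict the abstract's comparison is about.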
Keywords: Belief simulation, Belief generation, Autonomous agent, Distributed interactive simulation, Belief revision, Defeasible reasoning
Available online 16 December 1999.
DOI: https://doi.org/10.1016/0167-9236(94)00036-R