Perspectives on multiagent learning
Authors: Vincent Conitzer
Abstract
I lay out a slight refinement of Shoham et al.'s taxonomy of agendas that I consider sensible for multiagent learning (MAL) research. It is not intended to be rigid: senseless work can be done within these agendas, and additional sensible agendas may arise. Within each agenda, I identify issues and suggest directions. In the computational agenda, direct algorithms are often more efficient, but MAL plays a role especially when the rules of the game are unknown or direct algorithms are not known for the class of games. In the descriptive agenda, more emphasis should be placed on establishing which classes of learning rules actually model learning by multiple humans or animals. The agenda is also, in a way, circular; this has a positive side too: it can be used to verify the learning models. In the prescriptive agendas, the desiderata need to be made clear and should guide the design of MAL algorithms. The algorithms need not mimic humans' or animals' learning. I discuss some worthy desiderata; some from the literature do not seem well motivated. The learning problem is interesting both in cooperative and noncooperative settings, but the concerns are quite different. For many, if not most, noncooperative settings, future work should increasingly consider the learning itself strategically. Lower bounds cut across the agendas; they can be derived on the computational complexity and on the number of interactions needed.
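To make the notion of a "learning rule" in games concrete, below is a minimal illustrative sketch of fictitious play, a classic learning rule from the game-theoretic learning literature. It is not an algorithm from this paper; the particular game, payoffs, and horizon are arbitrary choices for illustration. Each player maintains counts of the opponent's past actions and best-responds to the resulting empirical frequencies.

```python
import numpy as np

# Illustrative sketch: fictitious play in a 2x2 common-payoff game.
# Each player best-responds to the empirical distribution of the
# other player's past actions. Game and horizon are arbitrary.

A = np.array([[1.0, 0.0],
              [0.0, 1.0]])  # row player's payoffs (coordination game)
B = A.copy()                # common payoffs, i.e., a cooperative setting

counts = [np.ones(2), np.ones(2)]  # opponent-action counts, smoothed start

for t in range(100):
    beliefs = [c / c.sum() for c in counts]
    a0 = int(np.argmax(A @ beliefs[1]))  # row player's best response
    a1 = int(np.argmax(beliefs[0] @ B))  # column player's best response
    counts[0][a0] += 1                   # column player observes row's action
    counts[1][a1] += 1                   # row player observes column's action

print("empirical play:",
      counts[0] / counts[0].sum(),
      counts[1] / counts[1].sum())
```

In this common-payoff game the empirical frequencies settle on one of the coordination equilibria; in general noncooperative games fictitious play need not converge, which is one reason the descriptive and prescriptive agendas diverge.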
Keywords: Multiagent learning, Learning in games, Reinforcement learning, Game theory
Article history: Received 18 May 2006, Revised 27 February 2007, Accepted 27 February 2007, Available online 30 March 2007.
DOI: https://doi.org/10.1016/j.artint.2007.02.004