Swenson B., Kar S., Xavier J.

Conference Record - Asilomar Conference on Signals, Systems and Computers

pp. 1490



The paper concerns the development of distributed equilibrium learning strategies for large-scale multi-agent games with repeated play. With inter-agent information exchange restricted to a preassigned communication graph, the paper presents a modified version of the fictitious play algorithm in which each agent's policy update relies only on local neighborhood information exchange. Under the assumption of identical, permutation-invariant agent utility functions, the proposed distributed algorithm drives the network-averaged empirical play histories to convergence to a subset of the Nash equilibria, designated the consensus equilibria. Applications of the proposed distributed framework to strategy design problems arising in large-scale traffic networks are discussed.
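To make the abstract's scheme concrete, the following is a minimal illustrative sketch, not the paper's exact algorithm: agents on a ring communication graph each maintain a local estimate of the network-averaged empirical play distribution, mix it with their neighbors' estimates (a consensus step), fold in their own latest action as a running average, and best-respond to the averaged estimate. The game, graph, and utility function here are all hypothetical stand-ins chosen only to show the local-information structure.

```python
import numpy as np

# Hypothetical illustration of distributed fictitious play (NOT the
# paper's exact algorithm). Each agent i keeps a local estimate f[i] of
# the network-averaged empirical play distribution and updates it using
# only its neighbors' estimates plus its own action.

n_agents, n_actions, T = 6, 2, 300

# Identical, permutation-invariant utility: payoff depends only on the
# estimated fraction of agents choosing each action (coordination game).
def utility(action, avg_dist):
    return avg_dist[action]  # reward for matching the crowd

# Ring communication graph: neighbors of i are i-1 and i+1 (mod n).
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents]
             for i in range(n_agents)}

# Local estimates start uniform over actions.
f = np.full((n_agents, n_actions), 1.0 / n_actions)

for t in range(1, T + 1):
    # Each agent best-responds to its own local estimate.
    actions = [int(np.argmax([utility(a, f[i]) for a in range(n_actions)]))
               for i in range(n_agents)]
    new_f = np.empty_like(f)
    for i in range(n_agents):
        # Consensus step: average own estimate with neighbors' estimates.
        mixed = (f[i] + sum(f[j] for j in neighbors[i])) / (1 + len(neighbors[i]))
        # Fold in own latest action as a running empirical average.
        played = np.eye(n_actions)[actions[i]]
        new_f[i] = mixed + (played - mixed) / (t + 1)
    f = new_f

# In this symmetric coordination example, the local estimates agree
# across agents and concentrate on a single action.
print(np.round(f, 3))
```

In this toy coordination game the agents lock onto a common action and their local estimates reach consensus, loosely mirroring the convergence of network-averaged empirical histories to a consensus equilibrium described in the abstract; the actual convergence result in the paper is proved under its own stated conditions.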