The paper considers distributed learning in large-scale games via fictitious-play-type algorithms. Given a preassigned communication graph for information exchange among the players, it studies a distributed implementation of the Empirical Centroid Fictitious Play (ECFP) algorithm that is well suited to large-scale games in terms of computational complexity and memory requirements. The distributed algorithm is shown to converge to an equilibrium set, termed the mean-centric equilibria (MCE), for a reasonably large class of games.
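As a rough illustration of an ECFP-style update with consensus-based information exchange, the following minimal sketch has each player best respond to its local estimate of the centroid empirical distribution, while the estimates are mixed over a communication graph. The game (a matching-style payoff on a common action set), the ring graph, the consensus weights, and the step sizes are all illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

# --- illustrative assumptions, not the paper's exact model ---
N = 6            # number of players
A = 3            # common action set {0, ..., A-1}
T = 200          # number of fictitious-play rounds
rng = np.random.default_rng(0)

# Hypothetical symmetric payoff: each player prefers actions that match
# the centroid distribution (a coordination-style game, for illustration).
payoff_matrix = np.eye(A)

# Ring communication graph -> doubly stochastic consensus weight matrix W.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i + 1) % N] = 0.25
    W[i, (i - 1) % N] = 0.25

actions = rng.integers(0, A, size=N)   # initial actions
f = np.zeros((N, A))                   # local empirical action frequencies
f[np.arange(N), actions] = 1.0
f_hat = f.copy()                       # local estimates of the centroid

for t in range(1, T + 1):
    # Each player best responds to its *local estimate* of the centroid
    # empirical distribution (ECFP-style step).
    for i in range(N):
        utilities = payoff_matrix @ f_hat[i]
        actions[i] = int(np.argmax(utilities))

    # Update each player's own empirical frequency with step size 1/(t+1).
    e = np.zeros((N, A))
    e[np.arange(N), actions] = 1.0
    f = f + (e - f) / (t + 1)

    # Consensus step: average neighbors' estimates, then inject the local
    # empirical frequency so the estimates track the true centroid.
    f_hat = W @ f_hat + (f - f_hat) / (t + 1)

centroid = f.mean(axis=0)
print("centroid empirical distribution:", np.round(centroid, 3))
print("player 0's local estimate:      ", np.round(f_hat[0], 3))
```

The point of the sketch is the information structure: no player ever sees the full joint empirical history, only its own actions and its neighbors' centroid estimates, which is what keeps the per-player memory and communication requirements modest as the number of players grows.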