Learning processes that converge to mixed-strategy equilibria often exhibit learning only in a weak sense: the time-averaged empirical distribution of players' actions converges to a set of equilibria. A stronger notion of learning mixed equilibria is to require that players' period-by-period strategies converge to a set of equilibria. A simple and intuitive method is considered for adapting algorithms that converge in the weaker sense so that they converge in the stronger sense. The adaptation is applied to the well-known fictitious play (FP) algorithm, and the adapted version of FP is shown to converge to the set of Nash equilibria in the stronger sense for games known to have the FP property.
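To make the weak/strong distinction concrete, the sketch below runs classical (unadapted) fictitious play on Matching Pennies, a zero-sum game known to have the FP property: the empirical action frequencies converge to the mixed equilibrium (0.5, 0.5) (weak learning), while the pure best responses played in each period keep cycling and never converge. The game, horizon, initialization, and tie-breaking rule are illustrative choices, not taken from the paper, and the code does not implement the paper's adaptation.

```python
import numpy as np

# Minimal sketch of classical discrete-time fictitious play on Matching
# Pennies (illustrative example, not the paper's adapted algorithm).
# Payoff matrix for player 1 (row player); player 2's payoff is the negative.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

T = 10_000
counts1 = np.ones(2)   # empirical action counts of player 1 (seeded to avoid a zero vector)
counts2 = np.ones(2)   # empirical action counts of player 2

for t in range(T):
    emp1 = counts1 / counts1.sum()   # empirical mixed strategy of player 1
    emp2 = counts2 / counts2.sum()   # empirical mixed strategy of player 2

    # Each player plays a pure best response to the opponent's empirical distribution.
    a1 = np.argmax(A @ emp2)         # player 1 maximizes expected payoff
    a2 = np.argmax(-(emp1 @ A))      # player 2 maximizes her (negated) payoff

    counts1[a1] += 1
    counts2[a2] += 1

# Empirical frequencies approach the mixed Nash equilibrium (0.5, 0.5),
# even though the period-by-period pure actions a1, a2 cycle indefinitely.
print("empirical frequencies, player 1:", counts1 / counts1.sum())
print("empirical frequencies, player 2:", counts2 / counts2.sum())
```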