We consider distributed optimization where N agents in a network minimize the sum f = f_1 + ... + f_N of their individual convex costs. To solve this problem, the existing literature proposes distributed gradient-like algorithms that are attractive due to their computationally simple iterations but suffer from slow convergence (in the iteration number k) to a solution. We propose a distributed gradient-like algorithm that we build from the (centralized) Nesterov gradient method. For convex f_i's with Lipschitz continuous and bounded gradients, we show that our method converges at rate O(log k / k). The achieved rate significantly improves over the convergence rates of existing distributed gradient-like methods, while the proposed algorithm maintains the same communication cost per iteration and a very similar computational cost per iteration. We further show that the rate O(log k / k) still holds if the bounded-gradients assumption is replaced by a certain linear growth assumption. We illustrate the gains obtained by our method on two simulation examples: acoustic source localization and learning a linear classifier based on the l_2-regularized logistic loss.
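For context, the (centralized) Nesterov gradient method that the proposed distributed algorithm builds on can be sketched as follows. This is a minimal illustrative implementation, not the paper's distributed method; the step size 1/L, the momentum coefficient (k-1)/(k+2), and the example objective are standard textbook choices assumed here for illustration.

```python
def nesterov_gradient(grad, L, x0, iters):
    """Centralized Nesterov gradient method for a convex function with
    L-Lipschitz continuous gradient (illustrative sketch).

    grad  -- callable returning the gradient at a point
    L     -- Lipschitz constant of the gradient (step size is 1/L)
    x0    -- initial iterate
    iters -- number of iterations k to run
    """
    x_prev = x0  # previous iterate x_{k-1}
    y = x0       # extrapolated point y_{k-1}
    x = x0
    for k in range(1, iters + 1):
        # Gradient step taken from the extrapolated point y.
        x = y - grad(y) / L
        # Momentum extrapolation with the classical (k-1)/(k+2) weight.
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)
        x_prev = x
    return x

# Example (assumed here for illustration): minimize f(x) = (x - 3)^2,
# whose gradient is 2*(x - 3) with Lipschitz constant L = 2.
x_star = nesterov_gradient(lambda x: 2.0 * (x - 3.0), L=2.0, x0=0.0, iters=50)
```

In the distributed setting of the paper, each agent runs a similar accelerated iteration on its local cost f_i and additionally averages iterates with its network neighbors, which is what keeps the per-iteration communication cost unchanged relative to plain distributed gradient methods.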