We study the large deviations performance of consensus+innovations distributed detection over random networks, where each sensor, at each time k, weight-averages its decision variable with its neighbors' decision variables (consensus) and accounts for its new observation (innovation). Sensor observations are independent and identically distributed (i.i.d.) both in time and in space, but have generic (non-Gaussian) distributions. The underlying network is random, described by a sequence of i.i.d. stochastic, symmetric weight matrices W(k); we measure the corresponding speed of consensus by |log r|, where r is the second largest eigenvalue of the second moment of W(k). We show that distributed detection exhibits a phase-transition behavior with respect to |log r|: when |log r| is above a threshold, distributed detection is equivalent to the optimal centralized detector, i.e., it achieves the error exponent equal to the Chernoff information. We explicitly quantify the optimality threshold for |log r| as a function of the log-moment generating function Λ0(·) of a sensor's log-likelihood ratio. When |log r| is below the threshold, we analytically find the achievable error exponent as a function of r and Λ0(·). Finally, we illustrate by an example the dependence of the optimality threshold on the type of the sensors' observation distribution.
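For concreteness, the sketch below writes out one consensus+innovations update in a running-consensus form commonly used for this class of algorithms; the exact recursion analyzed in the paper may differ, and the symbols N (number of sensors), x_i(k) (sensor i's decision variable), and L_i(k) (the log-likelihood ratio of sensor i's observation at time k) are illustrative notation introduced here, not taken from the abstract.

% Hedged sketch: a typical consensus+innovations (running-consensus) update,
% not necessarily the exact form used in this paper.
\begin{equation*}
  x_i(k) \;=\; \frac{k-1}{k}\sum_{j=1}^{N} W_{ij}(k)\, x_j(k-1)
  \;+\; \frac{1}{k}\, L_i(k), \qquad i = 1,\dots,N,
\end{equation*}
% where W_{ij}(k) are the entries of the random weight matrix W(k): the first
% term weight-averages the neighbors' decision variables (consensus), the
% second incorporates the new observation (innovation). Sensor i then decides
% between the two hypotheses by comparing x_i(k) against a threshold.

In this form, x_i(k) tracks a network-wide running average of the sensors' log-likelihood ratios, which is what allows comparison against the centralized detector's error exponent.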