Articles

Mota J.F.C., Xavier J.M.F., Aguiar P.M.Q., Püschel M.
Proceedings of the IEEE Conference on Decision and Control
2012
Abstract:
Many problems in control can be modeled as an optimization problem over a network of nodes. Solving them with distributed algorithms provides advantages over centralized solutions, such as privacy and the ability to process data locally. In this paper, we solve optimization problems in networks where each node requires only partial knowledge of the problem’s solution. We exploit this feature to design a decentralized algorithm that significantly reduces the total number of communications. Our algorithm is based on the Alternating Direction Method of Multipliers (ADMM), and we apply it to distributed Model Predictive Control (MPC) and TCP/IP congestion control. Simulation results show that the proposed algorithm requires fewer communications than previous work for the same solution accuracy.
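As background for the ADMM machinery the abstract refers to, the sketch below is a minimal global-consensus ADMM iteration on a toy problem (each node holds a private quadratic cost and all nodes must agree on the minimizer, here the average of the private data). It is not the paper's communication-reduced decentralized algorithm; the gather/average step plays the role of a coordinator.

```python
import numpy as np

# Toy global-consensus ADMM, not the paper's communication-reduced variant:
# each node i holds a private cost f_i(x) = 0.5 * (x - a_i)^2, and all
# nodes must agree on the common minimizer, which is the average of the a_i.
a = np.array([1.0, 4.0, 7.0, 10.0])   # private data, one value per node
n, rho = len(a), 1.0
x = np.zeros(n)                        # local primal variables
u = np.zeros(n)                        # scaled dual variables
z = 0.0                                # consensus variable
for _ in range(100):
    # local step: argmin_x f_i(x) + (rho/2)(x - z + u_i)^2  (closed form)
    x = (a + rho * (z - u)) / (1.0 + rho)
    z = np.mean(x + u)                 # gather/average step
    u = u + x - z                      # dual update
print(z)                               # approaches a.mean() = 5.5
```

Each node's update uses only its own data plus the shared consensus value, which is the structural property decentralized ADMM variants exploit.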
Jakovetic D., Moura J.M.F., Xavier J.
2013 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2013 - Proceedings
2013
Abstract:
This paper presents explicit convergence rates for a class of deterministic distributed augmented Lagrangian methods. The expressions for the convergence rates show the dependence on the underlying network parameters. Simulations illustrate the analytical results.
Mota J.F.C., Xavier J.M.F., Aguiar P.M.Q., Püschel M.
IEEE Transactions on Signal Processing
2012
Abstract:
We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least ℓ1-norm solution of the underdetermined linear system Ax = b and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a sensor network and is designed to minimize the communication between nodes. The algorithm only requires the network to be connected, has no notion of a central processing node, and no node has access to the entire matrix A at any time. We consider two scenarios in which either the columns or the rows of A are distributed among the compute nodes. Our algorithm, named D-ADMM, is a decentralized implementation of the alternating direction method of multipliers. We show through numerical simulation that our algorithm requires considerably fewer communications between the nodes than state-of-the-art algorithms.
Mota J.F.C., Xavier J.M.F., Aguiar P.M.Q., Püschel M.
2013 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2013 - Proceedings
2013
Abstract:
Reconstructing compressed sensing signals involves solving an optimization problem. An example is Basis Pursuit (BP) [1], which is applicable only in noise-free scenarios. In noisy scenarios, either the Basis Pursuit Denoising (BPDN) [1] or the Noise-Aware BP (NABP) [2] can be used. Consider a distributed scenario where the dictionary matrix and the vector of observations are spread over the nodes of a network. We solve the following open problem: design distributed algorithms that solve BPDN with a column partition, i.e., when each node knows only some columns of the dictionary matrix, and that solve NABP with a row partition, i.e., when each node knows only some rows of the dictionary matrix and the corresponding observations. Our approach manipulates these problems so that a recent general-purpose algorithm for distributed optimization can be applied.
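To make the BPDN objective mentioned above concrete, here is a centralized iterative soft-thresholding (ISTA) sketch for min ½‖Ax − b‖₂² + λ‖x‖₁ on synthetic data. The data sizes, λ, and iteration count are illustrative choices, and this is a standard baseline, not the distributed algorithm the paper derives.

```python
import numpy as np

# Centralized ISTA (iterative soft-thresholding) for the BPDN objective:
#   minimize 0.5 * ||A x - b||_2^2 + lam * ||x||_1
rng = np.random.default_rng(0)
m, n, lam = 30, 60, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 17, 42]] = [1.0, -1.0, 0.5]          # sparse ground truth
b = A @ x_true                                   # noise-free observations

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L, L = ||A||_2^2
x = np.zeros(n)
f0 = objective(x)
for _ in range(500):
    g = A.T @ (A @ x - b)                        # gradient of the smooth part
    z = x - step * g                             # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
print(objective(x))
```

With step size 1/L, each ISTA iteration is guaranteed not to increase the objective, so the final value lies well below the starting value f0.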
Simmons R.J., Toninho B., Pfenning F.
Proceedings of the 2011 ACM SIGPLAN X10 Workshop, X10 '11
2011
Abstract:
Several recent projects have aimed at promoting Wireless Sensor Networks as an infrastructure technology, where several independent users can submit applications that execute concurrently across the network. Running multiple applications concurrently causes significant energy-usage overhead on sensor nodes that cannot be eliminated by traditional schemes optimized for single-application scenarios. In this paper, we outline two main optimization techniques for reducing power consumption across applications. First, we describe a compiler-based approach that identifies redundant sensing requests across applications and eliminates them. Second, we cluster radio transmissions together by concatenating packets from independent applications based on Rate-Harmonized Scheduling.
Jakovetic D., Moura J.M.F., Xavier J.
IEEE Transactions on Signal Processing
2012
Abstract:
We study the large deviations performance of consensus+innovations distributed detection over noisy networks, where agents at each time step cooperate with their immediate neighbors (consensus) and assimilate their new observations (innovations). We show that, under noisy communication, all agents can still achieve an exponential error rate, even when certain (or most) agents cannot detect the event of interest in isolation. The key to achieving this is the appropriate design of the time-varying weight sequence {α_k = b_0/(a + k)} by which agents weigh their neighbors’ messages. We find a communication payoff threshold on the communication noise power, i.e., the critical noise power below which cooperation among neighbors improves detection performance and above which the noise in the communication among agents overwhelms the performance of the distributed detector. Numerical examples illustrate several tradeoffs among network parameters and between the time (or number of measurements) needed for a reliable decision and the transmission power invested by the agents.
Bajovic D., Jakovetic D., Xavier J., Sinopoli B., Moura J.M.F.
2010 48th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2010
2010
Abstract:
We apply large deviations theory to study the asymptotic performance of running consensus distributed detection in sensor networks. Running consensus is a recently proposed stochastic-approximation-type algorithm. At each time step k, the state at each sensor is updated by a local averaging of the sensor’s own state and the states of its neighbors (consensus) and by accounting for the new observations (innovation). We assume Gaussian, spatially correlated observations. We allow the underlying network to be time-varying, provided that the graph collecting the union of links that are online at least once over a finite time window is connected. This paper shows through large deviations that, under the stated assumptions on the network connectivity and the sensors’ observations, running consensus detection asymptotically approaches the optimal centralized detection in performance. That is, the Bayes probability of detection error (with the running consensus detector) decays exponentially to zero as k → ∞ at the Chernoff information rate, the best achievable rate of the asymptotically optimal centralized detector.
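The per-step recursion described above (local averaging plus a new observation) can be sketched on a toy fixed 4-node ring. The weight matrix, the constant per-node "observations", and the horizon are all illustrative choices, not the Gaussian detection setup of the paper; they just make the limit easy to see: each node's time-averaged state tracks the network-wide average, as the centralized statistic would.

```python
import numpy as np

# Toy running-consensus recursion on a fixed 4-node ring: each node mixes
# its state with its neighbors' states (consensus) and adds its newest
# observation (innovation). With constant per-node observations c_i, the
# time average s_i(k)/k at every node approaches the network-wide mean.
N, K = 4, 2000
# Metropolis-style weights for a 4-cycle (symmetric, doubly stochastic)
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
c = np.array([1.0, 2.0, 3.0, 4.0])    # each node's (constant) local statistic
s = np.zeros(N)
for k in range(1, K + 1):
    s = W @ s + c                      # consensus step + innovation step
avg = s / K
print(avg)                             # every entry approaches c.mean() = 2.5
```

Because W is doubly stochastic and mixing (its second-largest eigenvalue modulus is 0.5 here), old observations get averaged across the whole network, which is why every node ends up near the centralized average.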
Bajovic D., Jakovetic D., Xavier J., Sinopoli B., Moura J.M.F.
IEEE Transactions on Signal Processing
2011
Abstract:
We study, by large deviations analysis, the asymptotic performance of Gaussian running consensus distributed detection over random networks; in other words, we determine the exponential decay rate of the detection error probability. With running consensus, at each time step, each sensor averages its decision variable with the neighbors’ decision variables and accounts on-the-fly for its new observation. We show that: 1) when the rate of network information flow (the speed of averaging) is above a threshold, then Gaussian running consensus is asymptotically equivalent to the optimal centralized detector, i.e., the exponential decay rate of the error probability for running consensus equals the Chernoff information; and 2) when the rate of information flow is below a threshold, running consensus achieves only a fraction of the Chernoff information rate. We quantify this achievable rate as a function of the network rate of information flow. Simulation examples demonstrate our theoretical findings on the behavior of running consensus detection over random networks.
Bajovic D., Moura J.M.F., Xavier J., Sinopoli B.
IEEE Transactions on Signal Processing
2016
Abstract:
We find large deviations rates for consensus-based distributed inference for directed networks. When the topology is deterministic, we establish the large deviations principle and find exactly the corresponding rate function, equal at all nodes. We show that the dependence of the rate function on the stochastic weight matrix associated with the network is fully captured by its left eigenvector corresponding to the unit eigenvalue. Further, when the sensors’ observations are Gaussian, the rate function admits a closed-form expression. Motivated by these observations, we formulate the optimal network design problem of finding the left eigenvector that achieves the highest value of the rate function, for a given target accuracy. This eigenvector therefore minimizes the time that the inference algorithm needs to reach the desired accuracy. For Gaussian observations, we show that the network design problem can be formulated as a semidefinite (convex) program, and hence can be solved efficiently. When observations are identically distributed across agents, the system exhibits an interesting property: the graph of the rate function always lies between the graphs of the rate function of an isolated node and the rate function of a fusion center that has access to all observations. We prove that this fundamental property holds even when the topology and the associated system matrices change randomly over time, with arbitrary distribution. Due to the generality of its assumptions, the latter result requires more subtle techniques than the standard large deviations tools, contributing to the general theory of large deviations.