Articles

Jakovetic D., Xavier J., Moura J.M.F.
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
2010
Abstract:
We consider the weight design problem for the consensus algorithm under a finite time horizon. We assume that the underlying network is random, where links fail at each iteration with certain probability, and the link failures can be spatially correlated. We formulate a family of weight design criteria (objective functions) that minimize the n, n = 1, …, N (out of N possible), largest (slowest) eigenvalues of the matrix that describes the mean squared consensus error dynamics. We show that the objective functions are convex; hence, globally optimal weights (with respect to the design criteria) can be obtained efficiently. Numerical examples on large-scale, sparse random networks with spatially correlated link failures show that: 1) weights obtained according to our criteria lead to significantly faster convergence than the choices available in the literature; 2) different design criteria, corresponding to different n, exhibit an interesting tradeoff: faster transient performance comes at the price of slower long-run performance, and vice versa. Thus, n is a valuable degree of freedom that can be selected appropriately for the given time horizon.
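The consensus recursion analyzed above, x(k+1) = W(k) x(k) with random link failures, can be illustrated with a minimal simulation. The uniform link weight alpha below is a hypothetical choice for illustration, not the optimized weights designed in the paper:

```python
import numpy as np

def consensus_step(x, edges, p_fail, alpha, rng):
    """One consensus iteration x(k+1) = W(k) x(k) on a random network.

    Each edge fails independently with probability p_fail; surviving
    links get a common weight alpha (an illustrative choice, not the
    paper's optimized weights).
    """
    W = np.eye(len(x))
    for i, j in edges:
        if rng.random() >= p_fail:  # link alive at this iteration
            W[i, i] -= alpha
            W[j, j] -= alpha
            W[i, j] += alpha
            W[j, i] += alpha
    return W @ x

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-node ring (hypothetical topology)
x = np.array([1.0, 3.0, 5.0, 7.0])
target = x.mean()                           # consensus value: 4.0
for _ in range(200):
    x = consensus_step(x, edges, p_fail=0.2, alpha=0.3, rng=rng)
consensus_error = np.max(np.abs(x - target))
```

Since each realization of W(k) is symmetric and stochastic with positive diagonal (1 - 2·alpha > 0 here), the average of the states is preserved exactly while the disagreement contracts over time.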
Jakovetic D., Moura J.M.F., Xavier J.
2012 50th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2012
2012
Abstract:
We consider the tradeoffs between sensing and communication in a consensus+innovations distributed detection problem when the local communications among agents are noisy. Intuitively, we can expect the error performance of the distributed detector to be affected by both the sensing noise and the noise corrupting the communication among agents in the network. With too little communication (cooperation), the distributed detector's error performance is dominated by the sensing noise; with too much communication, it is dominated by the communication noise. We make this tradeoff precise through a large deviations analysis, i.e., by studying the exponential decay rate of the probability of error of the consensus+innovations distributed detector at each agent. Under a mild assumption of network connectedness, we show that: 1) the weight sequences affecting the consensus and innovations potentials in the distributed detector need to be carefully designed for the error probability at every agent's detector to decay exponentially fast; 2) the network exhibits a phase transition with respect to the communication noise power: below a threshold on the communication noise power, cooperation (communication) among agents improves the detection performance; above the threshold, inter-agent communication does not enhance it.
Jakovetic D., Xavier J., Moura J.M.F.
2012 20th Telecommunications Forum, TELFOR 2012 - Proceedings
2012
Abstract:
We derive the convergence rate of a distributed gradient algorithm for smooth optimization in networked systems. We assume that each agent in the network has a convex cost function f_i(x), known only to agent i, and the goal is to minimize the sum Σ_{i=1}^N f_i(x) of all agents' costs; such a problem formulation arises in various networked applications, e.g., distributed inference or source localization in sensor networks. With the distributed gradient algorithm under study, each agent, at each iteration k, takes a weighted average of its own and its neighbors' solution estimates, and then takes a step in the direction of the negative gradient of its local function. We establish a novel result: the distributed gradient algorithm has convergence rate O(1/k^{2/3}), in terms of the cost function optimality gap, under the assumption of convex f_i's with Lipschitz continuous and bounded gradients.
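The per-iteration update described above (weighted averaging followed by a local gradient step) can be sketched as follows, for hypothetical quadratic costs f_i(x) = (x - a_i)^2/2, whose global minimizer is the average of the a_i. The ring topology, weight matrix, and diminishing step size are illustrative assumptions, not the paper's specific setup:

```python
import numpy as np

# Hypothetical local costs f_i(x) = 0.5 * (x - a_i)^2, so grad f_i(x) = x - a_i;
# the minimizer of sum_i f_i is the average of the a_i.
a = np.array([0.0, 2.0, 4.0, 6.0])
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])   # doubly stochastic ring weights

x = np.zeros(len(a))                        # agents' local estimates
for k in range(1, 2001):
    step = 0.5 / k                          # diminishing step size
    # weighted average of neighbors' estimates, then a local gradient step
    x = W @ x - step * (x - a)

x_star = a.mean()                           # global minimizer: 3.0
gap = np.max(np.abs(x - x_star))
```

After enough iterations, all agents' estimates agree with each other and approach the global minimizer, even though each agent only ever evaluates its own gradient.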
Jakovetic D., Xavier J.M.F., Moura J.M.F.
IEEE Transactions on Signal Processing
2014
Abstract:
We consider distributed optimization in random networks where N nodes cooperatively minimize the sum Σ_{i=1}^N f_i(x) of their individual convex costs. Existing literature proposes distributed gradient-like methods that are computationally cheap and resilient to link failures, but have slow convergence rates. In this paper, we propose accelerated distributed gradient methods that: 1) are resilient to link failures; 2) are computationally cheap; and 3) improve convergence rates over other gradient methods. We model the network by a sequence of independent, identically distributed random matrices {W(k)} drawn from the set of symmetric, stochastic matrices with positive diagonals. The network is connected on average and the cost functions are convex, differentiable, with Lipschitz continuous and bounded gradients. We design two distributed Nesterov-like gradient methods that modify the D-NG and D-NC methods that we proposed for static networks. We prove their convergence rates in terms of the expected optimality gap of the cost function. Let k and K be the number of per-node gradient evaluations and per-node communications, respectively. Then the modified D-NG achieves rates O(log k/k) and O(log K/K), and the modified D-NC achieves rates O(1/k^2) and O(1/K^{2-ξ}), where ξ > 0 is arbitrarily small. For comparison, the standard distributed gradient method cannot do better than Ω(1/k^{2/3}) and Ω(1/K^{2/3}) on the same class of cost functions (even for static networks). Simulation examples illustrate our analytical findings.
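A Nesterov-like distributed iteration in the spirit of D-NG can be sketched as below, on a static network for simplicity, with the same kind of hypothetical quadratic costs; the momentum schedule k/(k+3) and step size c/(k+1) with c = 0.5 are illustrative assumptions:

```python
import numpy as np

# Hypothetical quadratic costs f_i(x) = 0.5 * (x - a_i)^2; minimizer is mean(a).
a = np.array([0.0, 2.0, 4.0, 6.0])
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])   # doubly stochastic ring weights

x = np.zeros(len(a))
x_prev = x.copy()
for k in range(2000):
    beta = k / (k + 3)                  # momentum weight
    y = x + beta * (x - x_prev)         # Nesterov extrapolation
    x_prev = x
    step = 0.5 / (k + 1)                # diminishing step c/(k+1), c = 0.5
    x = W @ y - step * (y - a)          # consensus + gradient at the
                                        # extrapolated point

gap = np.max(np.abs(x - a.mean()))
```

The structural difference from the plain distributed gradient method is the extrapolated point y at which both the averaging and the gradient are evaluated, which is what yields the accelerated rate.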
Jakovetic D., Xavier J., Moura J.M.F.
IEEE Transactions on Signal Processing
2011
Abstract:
We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x = x*. The objective function of the corresponding optimization problem is the sum of private (known only by a node), convex, nodes' objectives, and each node imposes a private convex constraint on the allowed values of x. We solve this problem for generic connected network topologies with asymmetric random link failures with a novel distributed, decentralized algorithm. We refer to this algorithm as AL-G (augmented Lagrangian gossiping), and to its variants as AL-MG (augmented Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast gossiping). The AL-G algorithm is based on the augmented Lagrangian dual function. Dual variables are updated by the standard method of multipliers, at a slow time scale. To update the primal variables, we propose a novel, Gauss-Seidel-type, randomized algorithm, at a fast time scale. AL-G uses unidirectional gossip communication, only between immediate neighbors in the network, and is resilient to random link failures. For networks with reliable communication (i.e., no failures), the simplified AL-BG algorithm reduces communication, computation, and data storage cost. We prove convergence for all proposed algorithms and demonstrate by simulations their effectiveness on two applications: ℓ1-regularized logistic regression for classification and cooperative spectrum sensing for cognitive radio networks.
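The two-time-scale structure described above (Gauss-Seidel primal sweeps at a fast time scale, method-of-multipliers dual updates at a slow time scale) can be illustrated on a hypothetical two-node toy problem with quadratic costs and the consensus constraint x1 = x2; this is a centralized sketch of the augmented Lagrangian machinery, not the paper's gossip protocol:

```python
# Toy two-time-scale augmented Lagrangian scheme: minimize f1(x1) + f2(x2)
# subject to x1 = x2, with hypothetical quadratics
# f1(x) = 0.5*(x-1)^2 and f2(x) = 0.5*(x-5)^2 (joint minimizer: x = 3).
rho = 1.0      # augmented Lagrangian penalty parameter
lam = 0.0      # dual variable for the constraint x1 - x2 = 0
x1, x2 = 0.0, 0.0
for _ in range(200):                       # slow time scale: dual ascent
    for _ in range(20):                    # fast time scale: Gauss-Seidel sweeps
        # argmin over x1 of 0.5*(x1-1)^2 + lam*x1 + 0.5*rho*(x1-x2)^2
        x1 = (1.0 - lam + rho * x2) / (1.0 + rho)
        # argmin over x2 of 0.5*(x2-5)^2 - lam*x2 + 0.5*rho*(x1-x2)^2
        x2 = (5.0 + lam + rho * x1) / (1.0 + rho)
    lam += rho * (x1 - x2)                 # method of multipliers step
```

At convergence both nodes agree on the minimizer of f1 + f2 (x1 = x2 = 3), and the multiplier settles at the value that balances the two local gradients (lam = -2 for these quadratics).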
Toninho B., Caires L., Pfenning F.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
2014
Abstract:
Session types are widely accepted as an expressive discipline for structuring communications in concurrent and distributed systems. In order to express unbounded sessions, session-typed languages often include general recursion, which may introduce undesirable divergence, e.g., infinite unobservable reduction sequences. In this paper we address, by means of typing, the challenge of ensuring non-divergence in a session-typed π-calculus with general (co)recursion, while still allowing interesting infinite behaviors to be definable. Our approach builds on a Curry-Howard correspondence between our type system and linear logic extended with co-inductive types, for which our non-divergence property implies consistency. We prove type safety for our framework, implying protocol compliance and global progress of well-typed processes. We also establish, using a logical relation argument, that well-typed processes are compositionally non-divergent, that is, that no well-typed composition of processes, including those dynamically assembled via name passing, can result in divergent behavior.
Lueken C., Cohen G.E., Apt J.
Environmental Science and Technology
2012
Abstract:
We compare the power output from a year of electricity generation data from one solar thermal plant, two solar photovoltaic (PV) arrays, and twenty Electric Reliability Council of Texas (ERCOT) wind farms. The analysis shows that solar PV electricity generation is approximately one hundred times more variable at frequencies on the order of 10^-3 Hz than solar thermal electricity generation, and the variability of wind generation lies between that of solar PV and solar thermal. We calculate the cost of variability of the different solar power sources and wind by using the costs of ancillary services and of the energy required to compensate for their variability and intermittency, and the cost of variability per unit of displaced CO2 emissions. We show that the costs of variability are highly dependent on both technology type and capacity factor. California emissions data were used to calculate the cost of variability per unit of displaced CO2 emissions. Variability cost is greatest for solar PV generation, at $8–11 per MWh. The cost of variability for solar thermal generation is $5 per MWh, while that of wind generation in ERCOT was found to be on average $4 per MWh. Variability adds ∼$15/tonne CO2 to the cost of abatement for solar thermal power, $25 for wind, and $33–40 for PV.
Silva R.
PhD Thesis
2020
Abstract:
Planning under uncertainty using Markov decision processes (MDPs) requires a model of the environment specifying the probabilistic effects of the actions that the agent is able to execute. The optimal policy of the agent is then computed from such a model. As such, when planning, the agent assumes the environment may only change as the direct result of its actions. In this thesis we consider lifting that assumption, and allow the agent to reason over the counterfactual "What if the world were different?" Effectively, we allow the agent to reason over other possible configurations of the world, where more rewarding optimal policies may exist, and over the cost required to shift the original environment to these modified worlds. Our goal is to endow the agent with the ability to plan over the possibility of actually operating in such configurations of the world, if beneficial. We introduce Counterfactual MDPs, a new class of MDPs that allows the agent to reason and plan over the aforementioned counterfactual. Solving a Counterfactual MDP consists in maximizing the expected value/cost trade-off over possible changes to the world. In the context of MDPs, the dynamics of the world are described in terms of transition probabilities. Our approach is thus to formulate the problem as a joint optimization of the transition probabilities and optimal policy of the MDP. We analyze the complexity of the resulting problem and formally prove it is NP-hard. We then derive two gradient-based approaches for solving the problem. These approaches culminate in the contribution of an iterative gradient-based algorithm, P-ITERATION, for solving Counterfactual MDPs. Additionally, we discuss methods for scaling up this algorithm to larger problems. We demonstrate the applicability of Counterfactual MDPs and the performance of the proposed algorithms in multiple scenarios.
We show, in particular, significant performance improvements that arise from allowing the agent to reason and plan over other possible worlds and their corresponding optimal policies. In the process we realize, however, that Counterfactual MDPs implicitly assume that the specific world configuration the agent envisioned will necessarily materialize. In many real-life scenarios, however, there exists an underlying uncertainty in the outcome of applying changes to the world. We extend the Counterfactual MDP model to allow the agent to reason over this uncertainty, and dub the resulting model Stochastic Outcomes Counterfactual MDPs. This new model assumes the uncertainty associated with changes to the world follows a probability distribution with parameters the agent can reason over and control, resulting in a new optimization problem. We show that the gradient of this new optimization problem can be computed as an expectation, and thus propose a sampling-based method for estimating it. This allows us to extend P-ITERATION to this new class of problems with stochastic outcomes. Finally, we demonstrate the applicability of the new model in multiple scenarios with uncertainty in the outcome of changes to the world. We show that by reasoning over this uncertainty the agent is able to find more valuable world configurations.
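The value/cost trade-off at the heart of Counterfactual MDPs can be illustrated on a hypothetical two-state chain, searched by brute force over a grid of world changes; all the numbers below, including the linear change cost, are invented for illustration, and the thesis's P-ITERATION algorithm is gradient-based rather than exhaustive:

```python
# Hypothetical 2-state, single-action MDP: being in state 1 yields reward 1,
# state 0 yields 0.  From state 0 the agent reaches state 1 with probability
# p = 0.2 + theta, where theta is a change applied to the world at linear
# cost 2 * theta.  We grid-search theta for the best value/cost trade-off,
# solving each modified MDP by value iteration.
GAMMA = 0.9

def state_value(p):
    """Value of state 0 under discounting, by (Jacobi) value iteration."""
    v0 = v1 = 0.0
    for _ in range(500):
        v0, v1 = (GAMMA * (p * v1 + (1 - p) * v0),
                  1.0 + GAMMA * (0.9 * v1 + 0.1 * v0))
    return v0

best_theta, best_obj = 0.0, float("-inf")
for i in range(7):                                  # theta in {0.0, ..., 0.6}
    theta = 0.1 * i
    obj = state_value(0.2 + theta) - 2.0 * theta    # value minus change cost
    if obj > best_obj:
        best_theta, best_obj = theta, obj

baseline = state_value(0.2)                         # no change to the world
```

Here the value gained by making state 1 easier to reach outweighs the change cost, so the best world configuration is a modified one, mirroring the thesis's finding that reasoning over other possible worlds can be beneficial.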
Melo T., Mendonça A.M., Campilho A.
ICIAR 2018: Image Analysis and Recognition
2018
Abstract:
The creation of retinal mosaics from sets of fundus photographs can significantly reduce the time spent on diabetic retinopathy (DR) screening, because through mosaic analysis ophthalmologists can examine several portions of the eye at a single glance and, consequently, detect and grade DR more easily. Like most of the methods described in the literature, this methodology includes two main steps: image registration and image blending. In the registration step, relevant keypoints are detected on all images, the transformation matrices are estimated based on the correspondences between those keypoints, and the images are reprojected into the same coordinate system. However, the main contributions of this work are in the blending step. In order to combine the overlapping images, a color compensation is applied to those images and a distance-based map of weights is computed for each one. The methodology is applied to two different datasets, and the mosaics obtained for one of them are visually compared with the results of two state-of-the-art methods. The mosaics obtained with our method present good quality and can be used for DR grading.
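The distance-based weighting used in the blending step can be illustrated with two hypothetical constant-intensity "images" placed on a shared canvas; real mosaics would first be registered and color-compensated as described above, and the distance map here (distance to the lateral image border) is a simplified stand-in:

```python
import numpy as np

H, W = 10, 10
canvas_a = np.zeros((H, W)); mask_a = np.zeros((H, W), dtype=bool)
canvas_b = np.zeros((H, W)); mask_b = np.zeros((H, W), dtype=bool)
canvas_a[:, 0:7] = 100.0; mask_a[:, 0:7] = True   # image A covers columns 0..6
canvas_b[:, 4:10] = 50.0; mask_b[:, 4:10] = True  # image B covers columns 4..9

def weight_map(mask):
    """Per-pixel weight = distance (in columns) to the image's lateral
    border, so pixels deep inside an image dominate the blend."""
    w = np.zeros(mask.shape)
    for r in range(mask.shape[0]):
        cols = np.where(mask[r])[0]
        if cols.size:
            left, right = cols.min(), cols.max()
            w[r, cols] = np.minimum(cols - left, right - cols) + 1
    return w

wa, wb = weight_map(mask_a), weight_map(mask_b)
total = wa + wb
# weighted blend in the overlap; plain copy where only one image contributes
mosaic = np.where(total > 0,
                  (wa * canvas_a + wb * canvas_b) / np.maximum(total, 1),
                  0.0)
```

In the overlap (columns 4..6) the mosaic transitions smoothly from A's intensity toward B's as the pixel moves away from A's interior, which is the seam-hiding effect the distance-based weights are designed to produce.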