Articles

Nambiar A., Bernardino A., Nascimento J.C.
Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP,
2018
Abstract:
We propose a novel methodology for cross-context analysis in person re-identification using 3D features acquired from consumer-grade depth sensors. Such features, although theoretically invariant to perspective changes, are nevertheless immersed in noise that depends on the view-point, mainly due to the low depth resolution of these sensors and imperfections in skeleton reconstruction algorithms. Thus, the re-identification of persons observed in different poses requires identifying the features whose characteristics transfer well between view-points. Taking view-point as context, we propose a cross-context methodology to improve the re-identification of persons across different view-points. In contrast to 2D cross-view re-identification methods, our approach is based on 3D features that do not require an explicit mapping between view-points, yet it takes advantage of feature selection methods that improve re-identification accuracy.
Harjula I., Hiivala M., Prabhu V.U., Toumpakaris D., Zhu H.
Next Generation Wireless Communications Using Radio over Fiber
2012
Abstract:
The aim of this chapter is to justify the need for reduced-complexity, reduced-overhead and cross-layer approaches to resource allocation, scheduling and channel estimation, and to present some possible approaches. A resource allocation algorithm is first examined that employs chunks of subcarriers instead of individual subcarriers, thus resulting in reduced complexity and overhead. It is shown that, by appropriately choosing the chunk size as a function of the coherence bandwidth of the channel, the algorithm can be employed in distributed broadband wireless systems without significant penalty. A cross-layer user scheduling and resource allocation algorithm is then presented. The algorithm modifies previous approaches that had focused on the sum rate, in order to also provide Quality-of-Service guarantees. It is demonstrated that the algorithm can improve fairness and accommodate MAC layer requests at the cost of some additional control overhead. Finally, the problem of channel estimation is considered, which is of crucial importance for the operation of scheduling and resource allocation algorithms. A scheme that relies on careful placement of pilots and superposition of pilots on data symbols is proposed. It is shown that the scheme can reduce the overhead that is required for channel estimation. Moreover, the complexity of a hardware implementation of the scheme is considered.
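The chunk-size rule sketched in the abstract can be illustrated as follows. This is a hedged sketch under assumed numbers (the coherence bandwidth, subcarrier spacing and safety fraction are illustrative, not taken from the chapter): the chunk is kept narrow enough that all of its subcarriers see an approximately flat channel.

```python
# Hypothetical sketch: choosing a subcarrier chunk size from the
# channel's coherence bandwidth, so that every subcarrier in a chunk
# experiences an approximately flat channel response.

def chunk_size(coherence_bw_hz, subcarrier_spacing_hz, fraction=0.5):
    """Number of subcarriers per chunk, limited to a fraction of the
    coherence bandwidth (all parameter values are illustrative)."""
    max_chunk_bw = fraction * coherence_bw_hz
    return max(1, int(max_chunk_bw // subcarrier_spacing_hz))

# Example: 150 kHz coherence bandwidth, 15 kHz subcarrier spacing.
print(chunk_size(150e3, 15e3))  # 5 subcarriers per chunk
```

Allocating per chunk rather than per subcarrier then divides both the feedback overhead and the size of the allocation problem by the chunk size.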
Godinho De Matos M., Ferreira P., Smith M.D., Telang R.
Management Science
2016
Abstract:
Peer ratings have become increasingly important sources of product information, particularly in markets for information goods. However, in spite of the increasing prevalence of this information, there are relatively few academic studies that analyze the impact of peer ratings on consumers transacting in “real-world” marketplaces. In this paper, we partner with a major telecommunications company to analyze the impact of peer ratings in a real-world video-on-demand market where consumer participation is organic and where movies are costly and well known to consumers. After experimentally changing the initial conditions of product information displayed to consumers, we find that, consistent with the prior literature, peer ratings influence consumer behavior independently from underlying product quality. However, we also find that, in contrast to the prior literature, there is little evidence of long-term bias as a result of herding effects, at least in our setting. Specifically, when movies are artificially promoted or demoted in peer rating lists, subsequent reviews cause them to return to their true quality position relatively quickly. One explanation for this difference is that consumers in our empirical setting likely had more outside information about the true quality of the products they were evaluating than did consumers in the studies reported in prior literature. Although tentative, this explanation suggests that in real-world marketplaces where consumers have sufficient access to outside information about true product quality, peer ratings may be more robust to herding effects and thus provide more reliable signals of true product quality than previously thought.
Ye C., Pallauf J., Kumar B.V.K.V., Coimbra M.T.
Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
2011
Abstract:
This work presents an investigation of the potential benefits of customizing the analysis of long-term ECG signals, collected from individuals using wearable sensors, by incorporating a small amount of data from these individuals into the training set of our classifiers. The global training dataset was selected from the MIT-BIH Arrhythmia Database. The proposal is validated on long-term ECG recordings collected via wearable technology in unsupervised environments, as well as on the MIT-BIH Normal Sinus Rhythm Database. Results illustrate that heartbeat classification performance could improve significantly if short periods of data (e.g., data from the first 5 minutes of every 2 hours) from the specific individual are regularly selected and incorporated into the global training dataset to train a customized classifier.
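The customization step described above amounts to augmenting a global training pool with a short patient-specific segment before training. A minimal sketch of that step, with feature extraction and the actual classifier abstracted away and all counts illustrative (the function name, the beat rate and the segment length are assumptions, not the paper's):

```python
# Minimal sketch (not the paper's pipeline): take roughly the first
# `minutes` of an individual's labelled beats and append them to the
# global training pool before fitting a customized classifier.

def build_training_set(global_beats, patient_beats, minutes=5, beats_per_min=70):
    """Return the global pool augmented with the patient's first
    `minutes` worth of beats (numbers are illustrative)."""
    n = minutes * beats_per_min
    return global_beats + patient_beats[:n]

global_pool = [("features", "N")] * 1000   # labelled global beats
patient = [("features", "V")] * 600        # this individual's beats
train = build_training_set(global_pool, patient)
print(len(train))  # 1000 global + 350 patient-specific beats
```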
DeYoung H., Caires L., Pfenning F., Toninho B.
Leibniz International Proceedings in Informatics, LIPIcs
2012
Abstract:
Prior work has shown that intuitionistic linear logic can be seen as a session-type discipline for the pi-calculus, where cut reduction in the sequent calculus corresponds to synchronous process reduction. In this paper, we exhibit a new process assignment from the asynchronous, polyadic pi-calculus to exactly the same proof rules. Proof-theoretically, the difference between these interpretations can be understood through permutations of inference rules that preserve observational equivalence of closed processes in the synchronous case. We also show that, under this new asynchronous interpretation, cut reductions correspond to a natural asynchronous buffered session semantics, where each session is allocated a separate communication buffer.
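The buffered session semantics mentioned at the end of the abstract can be pictured operationally. The sketch below is a loose analogy only (an assumption of this summary, not the paper's formal semantics): each session owns its own FIFO buffer, sends enqueue immediately without blocking, and receives consume messages in send order.

```python
# Loose operational analogy: one session = one private FIFO buffer,
# asynchronous sends, order-preserving receives.
from collections import deque

class Session:
    """Each session is allocated its own communication buffer."""
    def __init__(self):
        self.buffer = deque()

    def send(self, msg):
        """Asynchronous send: enqueue immediately, never block."""
        self.buffer.append(msg)

    def receive(self):
        """Dequeue the oldest message, preserving send order."""
        return self.buffer.popleft()

s = Session()
s.send("hello")
s.send("world")
print(s.receive(), s.receive())  # hello world
```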
Costa R.P., Lemos J.M., Mota J.F.C., Xavier J.M.F.
2014 IEEE Conference on Control Applications, CCA 2014
2014
Abstract:
This article presents a distributed model predictive controller (MPC) based on linear models that use input/output plant data and D-ADMM optimization. The use of input/output models has the advantage of not requiring a Kalman filter to estimate the plant state. The D-ADMM algorithm solves the optimization problem associated with a cost function that is the sum of the control agents' private costs; it is a modification of the Alternating Direction Method of Multipliers (ADMM) that requires no central node and significantly reduces the communication among adjacent nodes. The distributed MPC is derived for the special case of a linear graph. An application to distributed control of a water delivery canal is presented to illustrate the algorithm.
Mota J.F.C., Xavier J.M.F., Aguiar P.M.Q., Puschel M.
IEEE Transactions on Signal Processing
2013
Abstract:
We propose a distributed algorithm, named Distributed Alternating Direction Method of Multipliers (D-ADMM), for solving separable optimization problems in networks of interconnected nodes or agents. In a separable optimization problem there is a private cost function and a private constraint set at each node. The goal is to minimize the sum of all the cost functions, constraining the solution to be in the intersection of all the constraint sets. D-ADMM is proven to converge when the network is bipartite or when all the functions are strongly convex, although in practice convergence is observed even when these conditions are not met. We use D-ADMM to solve the following problems from signal processing and control: average consensus, compressed sensing, and support vector machines. Our simulations show that D-ADMM requires less communication than state-of-the-art algorithms to achieve a given accuracy level. Algorithms with low communication requirements are important, for example, in sensor networks, where sensors are typically battery-operated and communicating is the most energy-consuming operation.
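The average-consensus problem named in the abstract is the simplest instance of this separable formulation: node i holds a private cost f_i(x) = (x - a_i)^2 / 2, and the minimizer of the sum is the mean of the a_i. The sketch below is not D-ADMM itself; it solves that same instance with a plain synchronous gossip iteration on a ring network (the step size and topology are assumptions for illustration).

```python
# Illustrative sketch of the average-consensus instance (NOT D-ADMM):
# each node repeatedly moves toward the average of its ring neighbours;
# the iteration preserves the global mean and converges to it.

def ring_consensus(values, iters=200, step=0.3):
    x = list(values)
    n = len(x)
    for _ in range(iters):
        x = [x[i] + step * ((x[(i - 1) % n] + x[(i + 1) % n]) / 2 - x[i])
             for i in range(n)]
    return x

a = [1.0, 4.0, 7.0, 4.0]       # private values a_i held at each node
x = ring_consensus(a)
print(round(x[0], 3))           # every node approaches the mean, 4.0
```

D-ADMM solves the same kind of problem but with general convex private costs and constraint sets, and with fewer communicated messages per unit of accuracy.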
Mota J.F.C., Xavier J.M.F., Aguiar P.M.Q., Puschel M.
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
2012
Abstract:
We propose a distributed, decentralized algorithm for solving separable optimization problems over a connected network of compute nodes. In a separable problem, each node has its own private function and its own private constraint set. Private means that no other node has access to it. The goal is to minimize the sum of all nodes’ private functions, constraining the solution to be in the intersection of all the private sets. Our algorithm is based on the alternating direction method of multipliers (ADMM) and requires a coloring of the network to be available beforehand. We perform numerical experiments with the algorithm, applying it to compressed sensing problems. These show that the proposed algorithm generally requires fewer iterations, and hence less communication between nodes, than previous algorithms to achieve a given accuracy.
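The network coloring that the abstract requires as a pre-processing step can be computed with any proper-coloring heuristic; nodes of the same color have no edge between them and can therefore update in parallel. The greedy heuristic below is a standard textbook method used here for illustration, an assumption rather than the paper's own procedure.

```python
# Hedged sketch of the coloring pre-processing step: a greedy proper
# coloring of the communication graph. Same-colored nodes are mutually
# non-adjacent, so they can perform their ADMM updates simultaneously.

def greedy_coloring(adj):
    """adj: dict mapping node -> set of neighbours.
    Returns a dict mapping node -> color id (0, 1, ...)."""
    colors = {}
    for node in sorted(adj):
        used = {colors[nb] for nb in adj[node] if nb in colors}
        c = 0
        while c in used:          # smallest color unused by neighbours
            c += 1
        colors[node] = c
    return colors

# 4-node ring: two colors suffice, giving two parallel update rounds.
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(greedy_coloring(ring))  # {0: 0, 1: 1, 2: 0, 3: 1}
```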
Goncalves H., Correia M., Li X., Sankaranarayanan A., Tavares V.
2014 IEEE International Conference on Image Processing, ICIP 2014
2014
Abstract:
Sparse coding techniques have seen an increasing range of applications in recent years, especially in the area of image processing. In particular, sparse coding using ℓ1-regularization has been efficiently solved with the Augmented Lagrangian (AL) method applied to its dual formulation (DALM). This paper proposes decomposing the dictionary matrix into its singular value decomposition (SVD) form in order to simplify and speed up the implementation of the DALM algorithm. Furthermore, we propose an update rule for the penalty parameter used in AL methods that improves the convergence rate. The SVD of the dictionary matrix is computed as a pre-processing step prior to the sparse coding, so the method is best suited for applications where the same dictionary is reused across several sparse recovery steps, such as block image processing.
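The payoff of pre-computing the dictionary's SVD can be sketched as follows. This is an illustrative sketch under assumed details (the specific system solved, the penalty value and matrix sizes are not from the paper): linear systems of the form (I + β·A·Aᵀ)y = b recur inside augmented-Lagrangian iterations, and with A = U·S·Vᵀ cached once, each solve reduces to a diagonal scaling.

```python
# Illustrative sketch: caching the SVD of a fixed, reused dictionary A
# turns repeated solves of (I + beta * A @ A.T) y = b into diagonal
# solves, since A @ A.T = U @ diag(s**2) @ U.T with U square orthogonal.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))                     # fixed dictionary (m < n)
U, s, Vt = np.linalg.svd(A, full_matrices=False)    # one-off pre-processing

def solve_reused(b, beta):
    """Solve (I + beta * A @ A.T) y = b via the cached SVD."""
    return U @ ((U.T @ b) / (1.0 + beta * s**2))

b = rng.standard_normal(4)
y = solve_reused(b, beta=0.5)
direct = np.linalg.solve(np.eye(4) + 0.5 * A @ A.T, b)
print(np.allclose(y, direct))  # True
```

The O(m·n²) factorization cost is paid once, which is why the approach favors workloads, like block image processing, where many recovery problems share one dictionary.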