Computer Science
Diffusion
Signal Processing
Telecommunications
Physics
Radar
Thermodynamics
Authors
Jie Chen, Cédric Richard, Ali H. Sayed
Identifier
DOI: 10.1109/tsp.2015.2412918
Abstract
The diffusion LMS algorithm has been extensively studied in recent years. This efficient strategy makes it possible to address distributed optimization problems over networks in which the nodes have to collaboratively estimate a single parameter vector. Problems of this type are referred to as single-task problems. Nevertheless, several problems in practice are multitask-oriented in the sense that the optimum parameter vector may not be the same for every node. This brings up the issue of studying the performance of the diffusion LMS algorithm when it is run, either intentionally or unintentionally, in a multitask environment. In this paper, we conduct a theoretical analysis of the stochastic behavior of diffusion LMS in the case where the so-called single-task hypothesis is violated. We explain under what conditions diffusion LMS continues to deliver performance superior to non-cooperative strategies in the multitask environment. When these conditions are violated, we explain how to endow the nodes with the ability to cluster with other similar nodes in order to remove bias. We propose an unsupervised clustering strategy that allows each node to select, via adaptive adjustments of combination weights, the neighboring nodes with which it can collaborate to estimate a common parameter vector. Simulations are presented to illustrate the theoretical results and to demonstrate the efficiency of the proposed clustering strategy. The framework is applied to a useful problem involving a multi-target tracking task.
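For readers unfamiliar with the general scheme the abstract refers to, the following is a minimal Python sketch, not the authors' implementation, of adapt-then-combine diffusion LMS where each node reweights its neighbors according to how close their intermediate estimates remain (an unsupervised clustering heuristic in the spirit described above). All names and parameter values (mu, nu, w_opt, the ring topology, the inverse-distance weighting rule) are illustrative assumptions rather than quantities taken from the paper.

```python
# Minimal sketch of ATC diffusion LMS with adaptive combination weights.
# Assumptions: synthetic linear regression data per node, a ring network,
# two clusters of nodes with different optimum parameter vectors.
import numpy as np

rng = np.random.default_rng(0)

N, M = 8, 3            # number of nodes, parameter dimension
mu, nu = 0.01, 0.1     # LMS step size, combination-weight adaptation rate
T = 2000               # number of iterations

# Two clusters: the first N//2 nodes share one optimum vector, the rest another.
w_opt = np.where(np.arange(N)[:, None] < N // 2,
                 rng.standard_normal(M), rng.standard_normal(M))

# Ring topology: each node is linked to itself and its two neighbors.
A = np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
A[0, -1] = A[-1, 0] = 1
neighbors = [np.flatnonzero(A[k]) for k in range(N)]

w = np.zeros((N, M))       # current estimates
gamma = np.zeros((N, N))   # smoothed squared distances to neighbors' estimates

for t in range(T):
    # Adaptation step: each node performs one LMS update on its own data.
    psi = np.empty_like(w)
    for k in range(N):
        x = rng.standard_normal(M)
        d = x @ w_opt[k] + 0.1 * rng.standard_normal()
        psi[k] = w[k] + mu * (d - x @ w[k]) * x

    # Combination step: neighbors whose intermediate estimates stay close to
    # node k receive larger weights, so dissimilar nodes are gradually ignored.
    for k in range(N):
        nbrs = neighbors[k]
        gamma[k, nbrs] = (1 - nu) * gamma[k, nbrs] + nu * np.sum(
            (psi[nbrs] - w[k]) ** 2, axis=1)
        inv = 1.0 / (gamma[k, nbrs] + 1e-12)
        a = inv / inv.sum()          # combination weights sum to one
        w[k] = a @ psi[nbrs]

print(np.round(np.sum((w - w_opt) ** 2, axis=1), 4))  # per-node squared error
```

Running this sketch, the per-node errors should stay small despite the two clusters, because the inverse-distance weights suppress cooperation across the cluster boundary; with uniform (non-adaptive) weights the estimates would instead be biased toward a compromise between the two optimum vectors.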