Mathematical optimization
Robustness
Computer science
Augmented Lagrangian method
Penalty method
Optimization problem
Regularization
Wireless
Resource allocation
Artificial intelligence
Mathematics
Computer network
Biochemistry
Telecommunications
Gene
Chemistry
Authors
Haibao Huang, Yun Lin, Guan Gui, Haris Gacanin, Hikmet Sari, Fumiyuki Adachi
Source
Journal: IEEE Transactions on Vehicular Technology
[Institute of Electrical and Electronics Engineers]
Date: 2023-07-01
Volume/Issue: 72 (7): 9647-9652
Citations: 1
Identifiers
DOI: 10.1109/tvt.2023.3250963
Abstract
Unsupervised learning (UL) is widely used in wireless resource allocation problems due to its lower computational complexity and better performance compared with traditional optimization algorithms. Since wireless resource allocation problems usually have several constraints, primal-dual learning based UL frameworks are widely adopted. However, the primal-dual learning approach suffers from oscillation around the constraint threshold during training, and serious constraint violations may occur at deployment. In addition, although the output of the neural network can also be restricted to the feasible region by the penalty function method, the optimality of such training methods cannot be guaranteed. In this article, we combine the primal-dual learning method with the penalty function method and propose a regularized unsupervised learning (RUL) framework to enhance the robustness of the primal-dual learning based UL framework. In the proposed RUL framework, we use regularization techniques to improve the robustness of primal-dual learning by reducing the risk of constraint violations during training. A quadratic penalty term is introduced into the Lagrangian function of the wireless optimization problem, whose constraints can be treated as equality constraints, to form its augmented Lagrangian function. In the simulation, we use a simple point-to-point power optimization problem as an example to show that the proposed RUL improves the robustness of constraint convergence and also accelerates training.
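The sketch below illustrates the general idea the abstract describes: unsupervised (label-free) training of a neural network under an augmented Lagrangian loss, where a quadratic penalty term regularizes the primal-dual updates. It is only a minimal illustration in PyTorch, not the authors' implementation; the specific point-to-point problem form (minimize transmit power subject to a rate target), the rate target r_min, the penalty weight rho, the network architecture, and the step sizes are all assumptions made for the example.

```python
# Minimal sketch of regularized unsupervised learning (RUL) with an augmented
# Lagrangian loss. The problem setup (power minimization under a rate target),
# r_min, rho, the architecture, and step sizes are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical point-to-point problem: a network maps the channel gain h to a
# transmit power p >= 0; we minimize p subject to log2(1 + h * p) >= r_min.
# At the optimum the rate constraint is active, so it is handled as an equality
# constraint g(p, h) = r_min - log2(1 + h * p) = 0, as the abstract suggests.
r_min, rho = 2.0, 5.0   # target rate (bit/s/Hz) and quadratic penalty weight (assumed)

policy = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Softplus())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
lam = torch.zeros(1)    # Lagrange multiplier (dual variable)

for step in range(2000):
    h = torch.rand(256, 1) * 0.9 + 0.1   # random channel gains in [0.1, 1.0]
    p = policy(h)                          # unsupervised: no labels, only the loss below
    rate = torch.log2(1.0 + h * p)
    g = r_min - rate                       # constraint residual, driven toward 0

    # Augmented Lagrangian: objective + multiplier term + quadratic penalty.
    # The quadratic term is the regularization that damps oscillation of the
    # dual updates around the constraint threshold.
    loss = (p + lam * g + 0.5 * rho * g.pow(2)).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

    # Dual ascent on the multiplier; clamped at zero because the underlying
    # constraint is the inequality rate >= r_min.
    with torch.no_grad():
        lam += rho * g.mean().detach()
        lam.clamp_(min=0.0)

    if step % 500 == 0:
        print(f"step {step}: power {p.mean().item():.3f}, "
              f"rate {rate.mean().item():.3f}, lambda {lam.item():.3f}")
```

Setting rho to zero in this sketch recovers plain primal-dual learning, where the multiplier update alone tends to overshoot and oscillate around the rate target; the quadratic penalty adds curvature around the constraint boundary, which is the stabilizing effect the abstract attributes to the RUL framework.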