Authors
Grace Wahba,Yonghua Wang
Identifiers
DOI: 10.1080/03610929008830285
Abstract
We investigate the behavior of the optimal regularization parameter in the method of regularization for solving first kind integral equations with noisy data, under a range of definitions of "optimal", varying from mean square error in higher derivatives of the solution to mean square error in the predicted data. We study how the optimal regularization parameter changes when the optimality criterion changes, under a broad range of smoothness assumptions on the solution, the kernel of the integral operator, and the penalty functional. Although some of the calculations we present have been given elsewhere, we organize the results with a specific goal in mind. That is, we study a certain class of problems within which we can identify conditions on the solution, the kernel of the operator, and the penalty functional for which the rate at which the optimal regularization parameter goes to zero is the same for both predictive mean square error and solution mean square error optimality criteria, and for which it is different. The former circumstances are of interest because then data-based estimates of the regularization parameter such as generalized cross-validation, which are known to be optimal for predictive mean square error, will also go to zero at the optimal rate for solution mean square error.
Keywords: convergence rates; method of regularization; deconvolution; optimal smoothing parameter
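The setting the abstract describes can be illustrated numerically. The sketch below is not the paper's exact setup; it discretizes a convolution (deconvolution) problem, applies the method of regularization with an identity penalty (the paper allows general derivative penalties), and compares the regularization parameter chosen by generalized cross-validation with the parameters minimizing predictive mean square error and solution mean square error. The Gaussian kernel width, noise level, test function, and grid size are all illustrative assumptions.

```python
import numpy as np

# Discretized first kind integral equation (a convolution / deconvolution problem):
#   y = K f + noise, with K a Gaussian blurring kernel on a uniform grid (illustrative choice).
n = 200
t = np.linspace(0, 1, n)
h = t[1] - t[0]
sigma = 0.03
K = h * np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

f_true = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)   # solution to be recovered
rng = np.random.default_rng(0)
y = K @ f_true + 0.01 * rng.standard_normal(n)                 # noisy data

# Method of regularization (Tikhonov form) for a given lambda:
#   f_lambda = argmin_f  (1/n) ||K f - y||^2 + lambda ||f||^2
U, s, Vt = np.linalg.svd(K)
uy = U.T @ y

def criteria(lam):
    filt = s / (s ** 2 + n * lam)                    # spectral filter factors
    f_hat = Vt.T @ (filt * uy)                       # regularized solution
    trace_A = np.sum(s ** 2 / (s ** 2 + n * lam))    # trace of the influence matrix A(lambda)
    resid = y - K @ f_hat
    gcv = (np.sum(resid ** 2) / n) / (1 - trace_A / n) ** 2   # GCV function V(lambda)
    pred_mse = np.mean((K @ f_hat - K @ f_true) ** 2)         # predictive mean square error
    sol_mse = np.mean((f_hat - f_true) ** 2)                  # solution mean square error
    return gcv, pred_mse, sol_mse

lams = np.logspace(-12, -2, 200)
scores = np.array([criteria(l) for l in lams])
for name, col in zip(["GCV", "predictive MSE", "solution MSE"], scores.T):
    print(f"lambda minimizing {name}: {lams[np.argmin(col)]:.2e}")
```

In runs of this kind one can observe whether the GCV choice, which tracks the predictive-MSE optimum, is also of the right order for the solution-MSE optimum; the paper characterizes when those two optimal rates coincide.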