Overfitting
Optimization problem
Mathematical optimization
Class (philosophy)
Maximum likelihood
Computer science
Convex optimization
Point (geometry)
Frame (networking)
Mathematics
Regular polygon
Statistics
Artificial intelligence
Telecommunications
Geometry
Artificial neural network
Identifier
DOI: 10.1002/9781119092919.ch23
Abstract
Maximum likelihood estimation (MLE) is a very general way to frame a large class of problems in data science. MLE has two significant problems in general. The first is overfitting. The second is the logistical problem of actually calculating the optimal θ. Numerical optimization is the way that many MLE problems get solved in the real world, but it is useful in many other domains of application as well. This chapter gives an intuitive idea of what's going on under the hood when a numerical optimization routine runs. It explains what's called "convex optimization", a large class of optimization problems for which algorithms are guaranteed to converge on the correct solution. Logistic regression is based on the probability model in which p(x) is the probability that a real-world point at x will be a 1 rather than a 0.
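As a rough sketch of the ideas in the abstract, the snippet below fits a logistic regression model by numerically minimizing its negative log-likelihood with plain gradient descent. Because that objective is convex, the descent converges to the MLE. The data, the weight vector `true_w`, and all function names here are illustrative assumptions, not taken from the chapter itself.

```python
import numpy as np

# Hypothetical toy data: 2-D points whose labels follow a logistic model
# with an assumed "true" weight vector (purely for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -1.0])
y = (rng.random(200) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)

def neg_log_likelihood(w, X, y):
    """Negative log-likelihood of the model p(x) = sigmoid(w . x)."""
    z = X @ w
    # Per-point NLL is log(1 + exp(z)) - y*z; logaddexp keeps it stable.
    return np.sum(np.logaddexp(0.0, z) - y * z)

def gradient(w, X, y):
    """Mean gradient of the negative log-likelihood."""
    p = 1 / (1 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

# Plain gradient descent: the NLL is convex, so with a small enough
# step size this is guaranteed to approach the MLE.
w = np.zeros(2)
for _ in range(5000):
    w -= 1.0 * gradient(w, X, y)

print(w)  # estimate should land near true_w, up to sampling noise
```

A real workflow would typically hand `neg_log_likelihood` to a library optimizer (e.g. a quasi-Newton routine) rather than hand-roll the loop, but the loop makes the "what's under the hood" point of the chapter concrete.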