Sequential Minimal Optimization
Keywords: Chunking (psychology), Quadratic programming, Computer science, Computation, Algorithm, Working set, Quadratic equation, Support vector machine, Least-squares support vector machine, Artificial intelligence, Mathematics, Mathematical optimization, Geometry, Operating system
Source
Journal: Microsoft Research Technical Report
Date: 1998-04-21
Volume/Issue: 21-
Cited by: 633
Abstract
This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO’s computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On real-world sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.
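The "smallest possible QP problems" the abstract refers to involve just two Lagrange multipliers at a time, which is the minimum number that can be changed while preserving the linear equality constraint; a pair subproblem has a closed-form solution. The sketch below is a minimal, simplified variant of this idea for a linear SVM: it selects the second multiplier at random rather than using Platt's second-choice heuristics, precomputes the kernel matrix (which the full algorithm avoids for large data), and uses illustrative names (`smo_train`, `max_passes`) that are not from the paper.

```python
import numpy as np

def smo_train(X, y, C=1.0, tol=1e-4, max_passes=20, seed=0):
    """Simplified SMO sketch for a linear SVM: repeatedly pick a pair of
    multipliers violating the KKT conditions and solve the two-variable
    QP subproblem analytically."""
    rng = np.random.default_rng(seed)
    n = len(y)
    alpha = np.zeros(n)
    b = 0.0
    K = X @ X.T  # linear kernel, precomputed here for simplicity
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]  # prediction error on point i
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = rng.integers(n - 1)  # random second multiplier, j != i
                if j >= i:
                    j += 1
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                # Box bounds for alpha[j] implied by the equality constraint
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]  # second derivative along the constraint
                if eta >= 0:
                    continue
                # Analytic solution of the two-variable subproblem, clipped to the box
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-7:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # Update the threshold b from the KKT conditions
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] \
                     - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] \
                     - y[j] * (alpha[j] - aj_old) * K[j, j]
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X  # recover the linear weight vector
    return w, b

# Tiny linearly separable example
X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5],
              [0.0, 0.0], [0.5, -0.5], [-0.5, 0.5]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = smo_train(X, y)
pred = np.sign(X @ w + b)
```

Because each pair subproblem is solved in closed form, the inner loop does no numerical optimization, and the only persistent state is the multiplier vector, which is why memory scales linearly with the training set size.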