Support vector machine
Computer science
Artificial intelligence
Artificial neural network
Machine learning
Hinge loss
Kernel (algebra)
Kernel method
Sample (material)
Regression
Pattern recognition (psychology)
Scalability
Class (philosophy)
Enhanced Data Rates for GSM Evolution (EDGE)
Deep neural networks
Mathematics
Statistics
Combinatorics
Chemistry
Database
Chromatography
Authors
David Díaz-Vico, Jesús Prada, Adil Omari, José R. Dorronsoro
Source
Journal: Integrated Computer-Aided Engineering
[IOS Press]
Date: 2020-07-03
Volume/Issue: 27 (4): 389-402
Cited by: 19
Abstract
Kernel-based Support Vector Machines (SVMs), among the most popular machine learning models, usually achieve top performance in two-class classification and regression problems. However, their training cost is at least quadratic in sample size, which makes them unsuitable for large-sample problems. In contrast, Deep Neural Networks (DNNs), whose training cost is linear in sample size, can solve big data problems relatively easily. In this work we propose to combine the advanced representations that DNNs achieve in their last hidden layers with the hinge and ϵ-insensitive losses used in two-class SVM classification and regression. We can thus obtain much better scalability while achieving performance comparable to that of SVMs. Moreover, we also show that the resulting Deep SVM models are competitive with standard DNNs in two-class classification problems and have an edge in regression ones.
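The two losses the abstract names are standard and easy to state concretely. The following is a minimal NumPy sketch of both, applied to the scalar output of a network's last layer; it is an illustration of the loss definitions, not the authors' implementation (function names and the ϵ value are assumptions).

```python
import numpy as np

def hinge_loss(scores, y):
    """Two-class SVM hinge loss: mean of max(0, 1 - y * f(x)).

    scores: real-valued network outputs f(x); y: labels in {-1, +1}.
    Correct predictions with margin >= 1 incur zero loss.
    """
    return np.maximum(0.0, 1.0 - y * scores).mean()

def eps_insensitive_loss(pred, target, eps=0.1):
    """SVM-regression epsilon-insensitive loss: mean of max(0, |f(x) - y| - eps).

    Errors smaller than eps are ignored, giving the flat "tube" around the target.
    """
    return np.maximum(0.0, np.abs(pred - target) - eps).mean()

# A confidently correct prediction (score 2.0, label +1) costs nothing;
# a weakly wrong one (score -0.5, label -1 predicted as... actually label -1,
# score -0.5 has margin 0.5 < 1) is penalized.
print(hinge_loss(np.array([2.0, -0.5]), np.array([1.0, -1.0])))   # 0.25
print(eps_insensitive_loss(np.array([1.0, 0.0]),
                           np.array([1.05, 0.5]), eps=0.1))       # 0.2
```

In the Deep SVM setting described above, these losses replace the usual cross-entropy or squared error on the final layer, so the network is trained end-to-end by standard gradient descent while the last layer behaves like a linear SVM on the learned representation.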