Computer science
Artificial intelligence
Task (project management)
Class (philosophy)
Artificial neural network
Binary number
Binary classification
Node (physics)
Deep learning
Pattern recognition (psychology)
Contextual image classification
Machine learning
Process (computing)
Image (mathematics)
Support vector machine
Mathematics
Arithmetic
Operating system
Engineering
Economics
Management
Structural engineering
Authors
Yan Huang, Wei Wang, Liang Wang, Tieniu Tan
Identifier
DOI:10.1109/icip.2013.6738596
Abstract
This paper proposes a multi-task deep neural network (MT-DNN) architecture to handle the multi-label learning problem, in which learning each label is defined as a binary classification task, i.e., a positive class for "an instance owns this label" and a negative class for "an instance does not own this label". Multi-label learning is accordingly transformed into multiple binary classification tasks. Considering that a deep neural network (DNN) architecture can learn good intermediate representations shared across tasks, we generalize the single classification task of a traditional DNN into multiple binary classification tasks by defining the output layer with a negative class node and a positive class node for each label. After a pretraining process similar to that of deep belief nets, we redefine the label assignment error of the MT-DNN and perform the back-propagation algorithm to fine-tune the network. To evaluate the proposed model, we carry out image annotation experiments on two public image datasets, with 2,000 and 30,000 images respectively. The experiments demonstrate that the proposed model achieves state-of-the-art performance.
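The abstract's core idea is an output layer with one (negative, positive) node pair per label, so that multi-label prediction becomes a set of per-label two-class tasks on top of shared hidden representations. The sketch below illustrates that layer layout only; it is a minimal PyTorch reconstruction under stated assumptions, not the authors' implementation. The hidden-layer widths, sigmoid activations, per-label binary cross-entropy loss, and the absence of the DBN-style pretraining and the paper's redefined label assignment error are all illustrative simplifications.

```python
# Minimal sketch of an MT-DNN-style output layer: 2 nodes per label,
# softmax over each pair, one binary task per label (assumptions noted above).
import torch
import torch.nn as nn

class MTDNN(nn.Module):
    def __init__(self, in_dim, hidden_dims, num_labels):
        super().__init__()
        layers, d = [], in_dim
        for h in hidden_dims:                      # shared hidden layers across all label tasks
            layers += [nn.Linear(d, h), nn.Sigmoid()]
            d = h
        self.shared = nn.Sequential(*layers)
        # one (negative, positive) node pair per label -> 2 * num_labels output units
        self.out = nn.Linear(d, 2 * num_labels)
        self.num_labels = num_labels

    def forward(self, x):
        h = self.shared(x)
        logits = self.out(h).view(-1, self.num_labels, 2)
        # softmax over each pair turns every label into its own 2-class decision
        return torch.softmax(logits, dim=-1)

def multilabel_loss(pair_probs, targets):
    """targets: (batch, num_labels) binary matrix; illustrative loss, not the paper's."""
    pos = pair_probs[..., 1].clamp(1e-7, 1 - 1e-7)   # probability of the positive node per label
    return nn.functional.binary_cross_entropy(pos, targets.float())

# usage sketch with made-up dimensions
model = MTDNN(in_dim=512, hidden_dims=[256, 128], num_labels=24)
x = torch.randn(8, 512)
y = torch.randint(0, 2, (8, 24))
loss = multilabel_loss(model(x), y)
loss.backward()
```

The shared trunk followed by a label-wise two-node head is what lets the binary tasks reuse one intermediate representation; swapping the illustrative loss for the paper's redefined label assignment error would only change `multilabel_loss`, not the architecture.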