Computer Science
Review
Deep Neural Network
Artificial Intelligence
Machine Learning
Residual Neural Network
Deep Learning
Data Science
Computer Security
Authors
Li Liu, Timothy M. Hospedales, Yann LeCun, Mingsheng Long, Jiebo Luo, Wanli Ouyang, Matti Pietikäinen, Tinne Tuytelaars
Identifier
DOI:10.1109/tpami.2023.3341723
Abstract
Undoubtedly, Deep Neural Networks (DNNs), from AlexNet to ResNet to Transformer, have sparked revolutionary advances in diverse computer vision tasks. The scale of DNNs has grown exponentially with the rapid development of computational resources. Despite this tremendous success, DNNs (especially the recent foundation models) typically depend on massive amounts of training data to achieve high performance, and they are brittle: their performance can degrade severely under small changes in their operating environment. Collecting, labeling, and vetting such massive training datasets is difficult and expensive, requiring the painstaking efforts of experienced human annotators or domain experts; in many fields it is prohibitively costly or outright infeasible owing to privacy, safety, or ethical concerns, and in some domains only very limited examples, or none at all, can be gathered.