Computer science
Smoothness
Metric (unit)
Scalability
Ranking (information retrieval)
Differentiable function
Image retrieval
Stochastic gradient descent
Deep learning
Proportion (ratio)
Gradient descent
Artificial intelligence
Image (mathematics)
Data mining
Artificial neural network
Computer vision
Mathematics
Database
Physics
Mathematical analysis
Economics
Quantum mechanics
Operations management
Authors
Andrew Brown, Weidi Xie, Vicky Kalogeiton, Andrew Zisserman
Identifiers
DOI: 10.1007/978-3-030-58545-7_39
Abstract
Optimising a ranking-based metric, such as Average Precision (AP), is notoriously challenging because it is non-differentiable and hence cannot be optimised directly with gradient-descent methods. To this end, we introduce an objective that instead optimises a smoothed approximation of AP, coined Smooth-AP. Smooth-AP is a plug-and-play objective function that allows end-to-end training of deep networks with a simple and elegant implementation. We also present an analysis of why directly optimising the ranking-based metric of AP offers benefits over other deep metric learning losses. We apply Smooth-AP to standard retrieval benchmarks: Stanford Online Products and VehicleID, and also evaluate on larger-scale datasets: iNaturalist for fine-grained category retrieval, and VGGFace2 and IJB-C for face retrieval. In all cases, we improve performance over the state of the art, especially for larger-scale datasets, demonstrating the effectiveness and scalability of Smooth-AP in real-world scenarios.
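The core idea in the abstract can be illustrated with a short sketch: AP is non-differentiable because an item's rank is a sum of indicator functions 1[s_j > s_i]; replacing each indicator with a temperature-scaled sigmoid yields a smooth surrogate that gradient descent can optimise. Below is a minimal, stdlib-only Python sketch of that smoothed AP for a single query; the function and parameter names (`smooth_ap`, `tau`) are illustrative, and the paper's actual implementation is a deep-network training loss, not this standalone function.

```python
import math

def smoothed_step(x, tau):
    # Sigmoid relaxation of the indicator 1[x > 0]; small tau -> sharper step.
    return 1.0 / (1.0 + math.exp(-x / tau))

def smooth_ap(scores, labels, tau=0.01):
    """Smoothed Average Precision for one query (illustrative sketch).

    scores: similarity of each gallery item to the query.
    labels: 1 if the gallery item is a positive for the query, else 0.
    """
    positives = [i for i, lab in enumerate(labels) if lab == 1]
    ap = 0.0
    for i in positives:
        # Smoothed rank of item i among the positives only.
        rank_pos = 1.0 + sum(smoothed_step(scores[j] - scores[i], tau)
                             for j in positives if j != i)
        # Smoothed rank of item i among all gallery items.
        rank_all = 1.0 + sum(smoothed_step(scores[j] - scores[i], tau)
                             for j in range(len(scores)) if j != i)
        ap += rank_pos / rank_all
    return ap / len(positives)
```

As `tau` shrinks, the sigmoid approaches the hard indicator and `smooth_ap` approaches true AP; for example, a perfect ranking of two positives above two negatives gives a value near 1.0, while interleaved positives give roughly 0.5. Because every term is differentiable in the scores, the same expression written with autograd tensors yields gradients for end-to-end training.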