Computer science
Generalization
Metric (unit)
Set (abstract data type)
Monocular
Artificial intelligence
Scale (ratio)
Data set
Simplicity (philosophy)
Encoder
Leverage (statistics)
Scaling
Machine learning
Data mining
Mathematical analysis
Philosophy
Operations management
Physics
Geometry
Mathematics
Epistemology
Quantum mechanics
Economics
Programming language
Operating system
Authors
Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao
Source
Journal: Cornell University - arXiv
Date: 2024-01-01
Citations: 9
Identifier
DOI: 10.48550/arxiv.2401.10891
Abstract
This work presents Depth Anything, a highly practical solution for robust monocular depth estimation. Without pursuing novel technical modules, we aim to build a simple yet powerful foundation model that deals with any image under any circumstances. To this end, we scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data (~62M), which significantly enlarges the data coverage and thus reduces the generalization error. We investigate two simple yet effective strategies that make data scaling-up promising. First, a more challenging optimization target is created by leveraging data augmentation tools; it compels the model to actively seek extra visual knowledge and acquire robust representations. Second, auxiliary supervision is developed so that the model inherits rich semantic priors from pre-trained encoders. We evaluate its zero-shot capabilities extensively on six public datasets and on randomly captured photos, and it demonstrates impressive generalization ability. Further, by fine-tuning it with metric depth information from NYUv2 and KITTI, new state-of-the-art results are set. Our better depth model also results in a better depth-conditioned ControlNet. Our models are released at https://github.com/LiheYoung/Depth-Anything.
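The abstract describes two training strategies: pseudo-labeling large-scale unlabeled images and training the student under strong perturbations, plus an auxiliary loss that aligns the student's features with a frozen pre-trained encoder so it inherits semantic priors. The Python (PyTorch-style) sketch below is a minimal, hypothetical illustration of one such unlabeled-data step, not the authors' code: the L1 depth loss, the cosine-similarity alignment with a tolerance margin, the align_weight value, and the assumption that the student returns both depth and features are simplifications I introduce for illustration.

import torch
import torch.nn.functional as F

def unlabeled_training_step(student, teacher, frozen_encoder, images, strong_augment,
                            align_weight=0.1, margin=0.85):
    # Pseudo labels and target features are computed on the clean images,
    # with no gradients flowing into the teacher or the frozen encoder.
    with torch.no_grad():
        pseudo_depth = teacher(images)          # hypothetical teacher depth model
        target_feat = frozen_encoder(images)    # hypothetical frozen semantic encoder

    # Strategy 1: a harder optimization target -- the student must predict the
    # clean-image pseudo labels from strongly perturbed inputs.
    perturbed = strong_augment(images)
    pred_depth, student_feat = student(perturbed)   # assumed to return (depth, features)

    depth_loss = F.l1_loss(pred_depth, pseudo_depth)  # simplified regression loss

    # Strategy 2: semantic feature alignment with the frozen encoder. Pixels whose
    # features already match beyond a tolerance are left alone, so depth-irrelevant
    # details are not forced to agree exactly (the margin value is a guess).
    cos = F.cosine_similarity(student_feat, target_feat, dim=-1)
    align_loss = ((1.0 - cos) * (cos < margin).float()).mean()

    return depth_loss + align_weight * align_loss

In a full training loop, this unlabeled-data loss would presumably be combined with an ordinary supervised depth loss on the labeled portion of the data; the exact loss formulations and weights are given in the paper, not in this abstract.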