Keywords
Computer science
Transformer
Memory footprint
Scalability
Segmentation
Artificial intelligence
Machine learning
Computational model
Computer vision
Engineering
Database
Operating system
Electrical engineering
Voltage
Authors
Di Wang, Qiming Zhang, Yufei Xu, Jing Zhang, Boxue Du, Dacheng Tao, Liangpei Zhang
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 34
Identifier
DOI: 10.48550/arxiv.2208.03987
Abstract
Large-scale vision foundation models have made significant progress in visual tasks on natural images, with vision transformers being the primary choice due to their good scalability and representation ability. However, large-scale models in remote sensing (RS) have not yet been sufficiently explored. In this paper, we resort to plain vision transformers with about 100 million parameters and make the first attempt to propose large vision models tailored to RS tasks and investigate how such large models perform. To handle the large sizes and objects of arbitrary orientations in RS images, we propose a new rotated varied-size window attention to replace the original full attention in transformers, which can significantly reduce the computational cost and memory footprint while learning better object representation by extracting rich context from the generated diverse windows. Experiments on detection tasks show the superiority of our model over all state-of-the-art models, achieving 81.24% mAP on the DOTA-V1.0 dataset. The results of our models on downstream classification and segmentation tasks also show competitive performance compared to existing advanced methods. Further experiments show the advantages of our models in terms of computational complexity and data efficiency in transferring.
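The abstract contrasts full self-attention with window attention: restricting attention to local windows cuts cost from quadratic in the number of tokens to quadratic only in the window size. The paper's rotated varied-size window attention additionally predicts a scale, offset, and rotation per window to resample keys and values; the toy NumPy sketch below (not the authors' implementation) illustrates only the basic fixed-window partition-attend-merge pattern that such a mechanism builds on.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(feat, win=4):
    """Self-attention restricted to non-overlapping win x win windows.

    feat: (H, W, C) feature map; H and W are assumed divisible by `win`.
    RVSA (per the abstract) goes further by predicting per-window scale,
    offset, and rotation to resample keys/values; this sketch omits that.
    """
    H, W, C = feat.shape
    # Partition the map into (num_windows, win*win, C) token groups.
    x = feat.reshape(H // win, win, W // win, win, C)
    x = x.transpose(0, 2, 1, 3, 4).reshape(-1, win * win, C)
    # Identity q/k/v projections keep the toy dependency-free.
    q, k, v = x, x, x
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(C), axis=-1)
    out = attn @ v
    # Reverse the window partition back to (H, W, C).
    out = out.reshape(H // win, W // win, win, win, C)
    return out.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

feat = np.random.rand(8, 8, 16)
print(window_attention(feat).shape)  # (8, 8, 16)
```

For an H x W map, full attention scores all (HW)^2 token pairs, while window attention scores only HW * win^2 pairs, which is the computational and memory saving the abstract refers to for large remote-sensing images.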