Segmentation
Computer science
Margin (machine learning)
Transformer
Artificial intelligence
Semantics (computer science)
Image segmentation
Task (project management)
Pixel
Pattern recognition (psychology)
Natural language processing
Computer vision
Machine learning
Engineering
Programming language
Systems engineering
Voltage
Electrical engineering
Authors
Bowen Cheng,Ishan Misra,Alexander G. Schwing,Alexander Kirillov,Rohit Girdhar
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 9
Identifiers
DOI:10.48550/arxiv.2112.01527
Abstract
Image segmentation is about grouping pixels with different semantics, e.g., category or instance membership, where each choice of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K).
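The abstract's key component, masked attention, constrains cross-attention so each query only attends within its predicted mask region from the previous layer. The mechanism can be sketched as follows; this is an illustrative NumPy sketch under assumed names and shapes, not the paper's implementation (Mask2Former applies this inside a Transformer decoder over multi-scale image features).

```python
import numpy as np

def masked_attention(queries, keys, values, mask):
    """Cross-attention restricted to a predicted mask region.

    queries: (Q, d)  object queries
    keys:    (N, d)  image-feature keys (N = number of feature locations)
    values:  (N, d)  image-feature values
    mask:    (Q, N)  boolean; True where query q may attend to location n
                     (in Mask2Former this comes from the previous layer's
                     predicted masks -- here it is just a given array)
    """
    d = queries.shape[-1]
    logits = queries @ keys.T / np.sqrt(d)          # scaled dot-product scores
    logits = np.where(mask, logits, -1e9)           # block out-of-mask locations
    logits -= logits.max(axis=-1, keepdims=True)    # numerically stable softmax
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values                         # (Q, d) attended features
```

Locations outside the mask receive a large negative score, so their softmax weight is effectively zero and the output for each query is a convex combination of in-mask values only.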