Computer science
Pipeline (software)
Artificial intelligence
Segmentation
Vocabulary
Benchmark (surveying)
Natural language processing
Embedding
Class (philosophy)
Exploit
Similarity (geometry)
Semantic similarity
Machine learning
Ranking (information retrieval)
Pattern recognition (psychology)
Image (mathematics)
Linguistics
Geography
Philosophy
Programming language
Geodesy
Computer security
Authors
Son D. Dao, Hengcan Shi, Dinh Phung, Jianfei Cai
Identifier
DOI:10.1109/tmm.2023.3330102
Abstract
Recent mask proposal models have significantly improved the performance of open-vocabulary semantic segmentation. However, the use of a 'background' embedding during training in these methods is problematic, as the resulting model tends to over-learn and assign all unseen classes to the background class instead of their correct labels. Furthermore, they ignore the semantic relationships among text embeddings, which arguably can be highly informative for open-vocabulary prediction since some classes may be closely related to others. To this end, this paper proposes novel class enhancement losses that bypass the use of the 'background' embedding during training and simultaneously exploit the semantic relationship between text embeddings and mask proposals by ranking their similarity scores. To further capture the relationship between base and novel classes, we propose an effective pseudo label generation pipeline using a pretrained vision-language model. Extensive experiments on several benchmark datasets show that our method achieves the best overall performance for open-vocabulary semantic segmentation. Our method is flexible and can also be applied to the zero-shot semantic segmentation problem.
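To make the similarity-ranking idea in the abstract concrete, the following is a minimal PyTorch sketch of a ranking-style loss between mask-proposal embeddings and class text embeddings that involves no 'background' row. The function name, tensor shapes, and margin value are illustrative assumptions, not the paper's actual loss formulation.

```python
# Illustrative sketch only: names, shapes, and the margin are assumptions,
# not the authors' implementation.
import torch
import torch.nn.functional as F

def similarity_ranking_loss(mask_embeds, text_embeds, target_ids, margin=0.1):
    """Encourage each mask-proposal embedding to rank its ground-truth class
    text embedding above all other class embeddings (no 'background' class).

    mask_embeds: (N, D) embeddings of N mask proposals
    text_embeds: (C, D) text embeddings of C class names
    target_ids:  (N,)   ground-truth class index for each proposal
    """
    # Cosine similarity between every proposal and every class embedding.
    sim = F.normalize(mask_embeds, dim=-1) @ F.normalize(text_embeds, dim=-1).T  # (N, C)

    pos = sim.gather(1, target_ids.unsqueeze(1))        # (N, 1) score of the true class
    # Hinge-style ranking: every other class should score at least `margin` lower.
    viol = (sim - pos + margin).clamp(min=0)            # (N, C)
    viol.scatter_(1, target_ids.unsqueeze(1), 0.0)      # ignore the true-class column
    return viol.mean()
```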