Hugo Oliveira,Roberto M. César,Pedro H. T. Gama,Jefersson A. dos Santos
Identifier
DOI:10.1109/sibgrapi55357.2022.9991767
Abstract
Automatic and semi-automatic radiological image segmentation can help physicians process real-world medical data for tasks such as disease detection/diagnosis and surgery planning. Current segmentation methods based on neural networks are highly data-driven, often requiring hundreds of laborious annotations to converge properly. The generalization capabilities of traditional supervised deep learning are also limited by insufficient variability in the training dataset. One very prolific research field that aims to alleviate this dependence on large amounts of labeled data is Meta-Learning, which improves the generalization of traditional supervised learning by training models to learn in a label-efficient manner. In this tutorial, we present an overview of the literature and propose ways of merging this body of knowledge with deep segmentation architectures to produce highly adaptable multi-task meta-models for few-shot weakly-supervised semantic segmentation. We introduce a taxonomy to categorize Meta-Learning methods for both classification and segmentation, and discuss how to adapt potentially any few-shot meta-learner to a weakly-supervised segmentation task.