Workflow
Computer science
Margin (machine learning)
Artificial intelligence
Annotation
Consistency (knowledge bases)
Machine learning
Key (lock)
Domain knowledge
Data mining
Pattern recognition (psychology)
Database
Computer security
Authors
Xueying Shi,Yueming Jin,Qi Dou,Pheng‐Ann Heng
Identifier
DOI:10.1016/j.media.2021.102158
Abstract
• A novel Semi-Supervised Learning method for label-efficient Surgical workflow recognition (SurgSSL), which progressively utilizes unlabeled data in two learning stages, from implicit excavation to explicit excavation.
• A novel intra-sequence Visual and Temporal Dynamic Consistency (VTDC) scheme for implicit excavation from unlabeled data. By adding regularization from both visual and temporal perspectives, it encourages the model to excavate motion cues from unlabeled videos.
• A pre-knowledge pseudo label is designed to further optimize the model for explicit excavation from unlabeled data. With prior knowledge of the unlabeled data encoded in the pre-knowledge pseudo label, it provides more precise supervision than conventional pseudo labels.
• Outstanding experimental results on two popular benchmark surgical phase recognition datasets demonstrate the effectiveness of our SurgSSL method.

Surgical workflow recognition is a fundamental task in computer-assisted surgery and a key component of various applications in operating rooms. Existing deep learning models have achieved promising results for surgical workflow recognition, but rely heavily on a large amount of annotated videos. However, obtaining annotation is time-consuming and requires the domain knowledge of surgeons. In this paper, we propose a novel two-stage Semi-Supervised Learning method for label-efficient Surgical workflow recognition, named SurgSSL. Our proposed SurgSSL progressively leverages the inherent knowledge held in the unlabeled data to a larger extent: from implicit unlabeled data excavation via motion knowledge excavation, to explicit unlabeled data excavation via pre-knowledge pseudo labeling. Specifically, we first propose a novel intra-sequence Visual and Temporal Dynamic Consistency (VTDC) scheme for implicit excavation. It enforces prediction consistency of the same data under perturbations in both spatial and temporal spaces, encouraging the model to capture rich motion knowledge. We further perform explicit excavation by optimizing the model towards our pre-knowledge pseudo label. It is naturally generated by the VTDC-regularized model with prior knowledge of the unlabeled data encoded, and demonstrates superior reliability for model supervision compared with labels generated by existing methods. We extensively evaluate our method on two public surgical datasets, Cholec80 and the M2CAI challenge dataset. Our method surpasses state-of-the-art semi-supervised methods by a large margin, e.g., improving accuracy by 10.5% under the severest annotation regime of the M2CAI dataset. Using only 50% of the labeled videos on Cholec80, our approach achieves competitive performance compared with the full-data training method.
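The two-stage idea described in the abstract can be sketched in code. The following is a minimal, hypothetical NumPy sketch (not the authors' implementation): the first function illustrates a consistency penalty between predictions on a clip and on a spatially/temporally perturbed copy of the same clip, in the spirit of VTDC; the second illustrates keeping only the confident predictions of the regularized model as hard pseudo labels, in the spirit of pre-knowledge pseudo labeling. Function names and the confidence threshold are assumptions for illustration.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over class logits."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def vtdc_consistency_loss(logits_orig, logits_perturbed):
    """Illustrative consistency penalty (mean-squared error between
    softmax predictions of the same clip under spatial/temporal
    perturbation) -- the 'implicit excavation' stage in spirit."""
    p = softmax(logits_orig)
    q = softmax(logits_perturbed)
    return float(np.mean((p - q) ** 2))

def pre_knowledge_pseudo_labels(logits_unlabeled, threshold=0.9):
    """Illustrative pseudo labeling: keep only predictions of the
    regularized model whose confidence exceeds `threshold` as hard
    labels -- the 'explicit excavation' stage in spirit.
    Returns (kept_labels, keep_mask)."""
    probs = softmax(logits_unlabeled)
    confidence = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    mask = confidence >= threshold
    return labels[mask], mask

# Tiny demo: one confident frame prediction and one ambiguous one.
logits = np.array([[5.0, 0.0, 0.0],   # confident -> kept as pseudo label
                   [0.1, 0.2, 0.0]])  # ambiguous -> discarded
kept, mask = pre_knowledge_pseudo_labels(logits, threshold=0.9)
```

A perturbed clip that yields identical logits incurs zero consistency loss, and the loss grows as the two predictions diverge; in practice such a penalty on unlabeled clips would be added to the supervised loss on the labeled subset.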