Authors
Xinhuan Sun, Wuchao Li, Bangkang Fu, Yunsong Peng, Junjie He, Lihui Wang, Tongyin Yang, Xue Meng, Jin Li, Jinjing Wang, Ping Huang, Rongpin Wang
Abstract
The pathological diagnosis of renal cell carcinoma is crucial for treatment. Multi-instance learning (MIL) is currently the common approach to whole-slide image classification of renal cell carcinoma, but it typically rests on the assumption that instances are independent and identically distributed. This is inconsistent with the diagnostic process, which must consider correlations between different instances. Furthermore, the high resource consumption of pathology images remains an urgent problem. We therefore propose a new multi-instance learning method to address these issues.

In this study, we propose a hybrid multi-instance learning model based on the Transformer and the Graph Attention Network, called TGMIL, which classifies whole-slide images of renal cell carcinoma without pixel-level annotation or region-of-interest extraction. Our approach consists of three steps. First, we design a feature pyramid built from multiple low magnifications of the whole-slide image, named MMFP. It lets the model incorporate richer information while reducing memory consumption and training time compared with using the highest magnification alone. Second, TGMIL combines the strengths of the Transformer and the Graph Attention Network, addressing the loss of contextual and spatial information among instances. Within the Graph Attention Network stream, a simple and efficient scheme based on max pooling and mean pooling yields the graph adjacency matrix without extra memory consumption. Finally, the outputs of the two streams of TGMIL are aggregated to classify renal cell carcinoma.

On the validation set of TCGA-RCC, a public renal cell carcinoma dataset, TGMIL achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.98±0.0015 and an accuracy (ACC) of 0.9191±0.0062. It also performed well on our private validation set of renal cell carcinoma pathology images, attaining an AUC of 0.9386±0.0162 and an ACC of 0.9197±0.0124.
Furthermore, on CAMELYON16, a public breast cancer whole-slide image test dataset, our model showed good classification performance with an accuracy of 0.8792. TGMIL models the diagnostic process of pathologists and achieves good classification performance on multiple datasets. At the same time, the MMFP module efficiently reduces resource requirements, offering a novel angle for exploring computational pathology images.
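The abstract states that the graph adjacency matrix for the Graph Attention stream is derived from max pooling and mean pooling of instance features, with no extra memory cost. The sketch below illustrates one plausible reading of that idea; it is an assumption, not the authors' released code. The pooling axis, the cosine-similarity step, and the `threshold` parameter are all hypothetical choices made for illustration.

```python
import numpy as np

def build_adjacency(features, threshold=0.5):
    """Hypothetical adjacency construction from pooled instance features.

    features: (N, D) array of patch embeddings from one whole-slide image.
    Returns a binary (N, N) adjacency matrix with self-loops.
    """
    # Reduce each instance's feature vector with max and mean pooling,
    # giving a compact (N, 2) signature -- only O(N) extra memory.
    sig = np.stack([features.max(axis=1), features.mean(axis=1)], axis=1)
    # Cosine similarity between instance signatures.
    sig = sig / (np.linalg.norm(sig, axis=1, keepdims=True) + 1e-8)
    sim = sig @ sig.T                          # (N, N), symmetric
    # Threshold to a binary graph and add self-loops.
    adj = (sim >= threshold).astype(np.float32)
    np.fill_diagonal(adj, 1.0)
    return adj

# Toy usage: 6 instances with 128-dimensional embeddings.
X = np.random.default_rng(0).normal(size=(6, 128))
A = build_adjacency(X)
```

The resulting matrix could then serve as the neighborhood structure for a graph attention layer; because only two pooled scalars per instance are kept, the construction avoids storing any large intermediate tensors.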