Authors
Furkan Eren, Mete Aslan, Dilek Kanarya, Yiğit Uysallı, Musa Aydın, Berna Kıraz, Ömer Aydın, Alper Kıraz
Abstract
Precise and rapid monitoring of key cytometric features such as cell count, size, morphology, and DNA content is crucial in life science applications. Traditionally, image cytometry relies on visual inspection of hemocytometers, an approach that is error-prone due to operator subjectivity. Recently, deep learning approaches have emerged as powerful tools enabling quick and accurate image cytometry applicable to different cell types. By leading to simpler, more compact, and more affordable solutions, these approaches have established image cytometry as a viable alternative to flow cytometry or Coulter counting. In this study, we demonstrate a modular deep learning system, DeepCAN, providing a complete solution for automated cell counting and viability analysis. DeepCAN employs three neural network blocks, called Parallel Segmenter, Cluster CNN, and Viability CNN, trained for initial segmentation, cluster separation, and viability analysis, respectively. The Parallel Segmenter and Cluster CNN blocks achieve accurate segmentation of individual cells, while the Viability CNN block performs viability classification. A modified U-Net, a well-known deep neural network model for bioimage analysis, is used in the Parallel Segmenter, while the LeNet-5 architecture and its modified version, Opto-Net, are used for the Cluster CNN and Viability CNN, respectively. We train the Parallel Segmenter using 15 images of A2780 cells and 5 images of yeast cells, containing a total of 14742 individual cell images. Similarly, 6101 and 5900 A2780 cell images are employed for training the Cluster CNN and Viability CNN models, respectively. 2514 individual A2780 cell images are used to test the overall segmentation performance of the Parallel Segmenter combined with the Cluster CNN, revealing high Precision/Recall/F1-Score values of 96.52%/96.45%/98.06%, respectively. The cell counting/viability performance of DeepCAN is tested with A2780 (2514 cells), A549 (601 cells), Colo (356 cells), and MDA-MB-231 (887 cells) cell images, revealing high analysis accuracies of 96.76%/99.02%, 93.82%/95.93%, 92.18%/97.90%, and 85.32%/97.40%, respectively.
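To make the three-block design concrete, the sketch below outlines how such a pipeline could be chained in PyTorch: a segmenter produces a mask, single-cell crops are filtered by a LeNet-5-style cluster classifier, and a second classifier scores viability. The class name `LeNet5Classifier`, the layer sizes, the helper `deepcan_analyze`, and the `crop_fn` interface are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a DeepCAN-like pipeline, assuming a U-Net-style segmenter
# and 32x32 grayscale single-cell crops. Names and sizes are hypothetical.
import torch
import torch.nn as nn

class LeNet5Classifier(nn.Module):
    """LeNet-5-style CNN, standing in for the Cluster CNN / Viability CNN blocks."""
    def __init__(self, num_classes=2, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):  # x: (N, 1, 32, 32) cell crops
        return self.classifier(self.features(x))


def deepcan_analyze(image, segmenter, cluster_cnn, viability_cnn, crop_fn):
    """Chain the three blocks: segment -> separate clusters -> classify viability.

    `segmenter` is assumed to return a binary mask; `crop_fn` is assumed to
    extract fixed-size single-object crops of shape (N, 1, 32, 32) from it.
    """
    with torch.no_grad():
        mask = segmenter(image)                    # initial segmentation
        crops = crop_fn(image, mask)               # candidate object crops
        is_cluster = cluster_cnn(crops).argmax(1)  # 1 = cluster, 0 = single cell
        singles = crops[is_cluster == 0]           # clusters would be re-split here
        viable = viability_cnn(singles).argmax(1)  # 1 = viable, 0 = non-viable
    total = singles.shape[0]
    viability = viable.float().mean().item() if total else 0.0
    return {"cell_count": total, "viability": viability}
```

In this sketch the cluster-positive crops are simply discarded for brevity; in the system described above they would instead be re-segmented into individual cells before viability classification.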