Background & Aims
Artificial intelligence (AI)–based optical diagnosis systems (CADx) have been developed to allow pathology prediction of colorectal polyps during colonoscopies. However, CADx systems have not yet been validated for autonomous performance. Therefore, we conducted a trial comparing autonomous AI to AI-assisted human (AI-H) optical diagnosis.

Methods
We performed a randomized noninferiority trial of patients undergoing elective colonoscopies at 1 academic institution. Patients were randomized to (1) autonomous AI-based CADx optical diagnosis of diminutive polyps without human input or (2) diagnosis by endoscopists who performed optical diagnosis of diminutive polyps after seeing the real-time CADx diagnosis. The primary outcome was accuracy of optical diagnosis in both arms using pathology as the gold standard. Secondary outcomes included agreement with pathology for surveillance intervals.

Results
A total of 467 patients were randomized (238 patients/158 polyps in the autonomous AI group and 229 patients/179 polyps in the AI-H group). Accuracy for optical diagnosis was 77.2% (95% confidence interval [CI], 69.7–84.7) in the autonomous AI group and 72.1% (95% CI, 65.5–78.6) in the AI-H group (P = .86). For high-confidence diagnoses, accuracy for optical diagnosis was 77.2% (95% CI, 69.7–84.7) in the autonomous AI group and 75.5% (95% CI, 67.9–82.0) in the AI-H group. Autonomous AI had statistically significantly higher agreement with pathology-based surveillance intervals compared with AI-H (91.5% [95% CI, 86.9–96.1] vs 82.1% [95% CI, 76.5–87.7]; P = .016).

Conclusions
Autonomous AI-based optical diagnosis exhibits noninferior accuracy to endoscopist-based diagnosis. Both autonomous AI and AI-H exhibited relatively low accuracy for optical diagnosis; however, autonomous AI achieved higher agreement with pathology-based surveillance intervals.
(ClinicalTrials.gov, Number NCT05236790)