Blinding
Concordance
Kappa
Randomized controlled trial
Statistics
Cohen's kappa
Medicine
Reliability (semiconductor)
Risk assessment
Computer science
Psychology
Internal medicine
Mathematics
Power (physics)
Physics
Geometry
Computer security
Quantum mechanics
Authors
Yuan Tian,Xi Yang,Suhail A.R. Doi,Luis Furuya‐Kanamori,Lifeng Lin,Joey SW Kwong,Chang Xu
Abstract
RobotReviewer is a tool for automatically assessing the risk of bias in randomized controlled trials, but there is limited evidence of its reliability. We evaluated the agreement between RobotReviewer and human reviewers on risk of bias assessments for 1955 randomized controlled trials. The risk of bias in these trials was assessed via two approaches: (1) manually by human reviewers, and (2) automatically by RobotReviewer. The manual assessment was performed independently by two groups, with two additional rounds of verification. Agreement between RobotReviewer and humans was measured via the concordance rate and Cohen's kappa statistic, based on the binary classification of the risk of bias (low vs. high/unclear) as restricted by RobotReviewer. The concordance rates varied by domain, ranging from 63.07% to 83.32%. Cohen's kappa statistics showed poor agreement between humans and RobotReviewer for allocation concealment (κ = 0.25, 95% CI: 0.21–0.30) and blinding of outcome assessors (κ = 0.27, 95% CI: 0.23–0.31), while agreement was moderate for random sequence generation (κ = 0.46, 95% CI: 0.41–0.50) and blinding of participants and personnel (κ = 0.59, 95% CI: 0.55–0.64). The findings demonstrate domain-specific differences in the level of agreement between RobotReviewer and humans. We suggest that RobotReviewer may be a useful auxiliary tool, but the specific manner of its integration as a complementary tool requires further discussion.
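For readers unfamiliar with the agreement measures named in the abstract, the following is a minimal sketch (not the authors' code, and using hypothetical labels rather than the study data) of how a concordance rate and Cohen's kappa can be computed for binary risk-of-bias ratings (low vs. high/unclear) from two raters, here via scikit-learn's cohen_kappa_score.

```python
# Minimal sketch: agreement between two raters on binary risk-of-bias labels.
# Labels are hypothetical; 1 = low risk, 0 = high/unclear risk.
import numpy as np
from sklearn.metrics import cohen_kappa_score

human_labels = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
robot_labels = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

# Concordance rate: proportion of trials on which the two raters agree.
concordance = np.mean(human_labels == robot_labels)

# Cohen's kappa: chance-corrected agreement, kappa = (p_o - p_e) / (1 - p_e),
# where p_o is observed agreement and p_e is the agreement expected by chance.
kappa = cohen_kappa_score(human_labels, robot_labels)

print(f"Concordance rate: {concordance:.2%}")
print(f"Cohen's kappa: {kappa:.2f}")
```

Kappa corrects the raw concordance rate for the agreement expected by chance, which is why the two measures can rank the four domains differently.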