Authors
Xiaogang Wang, Mei Liu, Qiao He, Mingqi Wang, Jiayue Xu, Ling Li, Guowei Li, He Lin, Kang Zou, Xin Sun
Abstract
Background and Objectives: In observational studies that use routinely collected health data (RCD) to explore treatment effects, algorithms are used to identify study variables. However, the extent to which these algorithms are reliable, and how they affect the credibility of effect estimates, is far from clear. This study aimed to investigate the validation of algorithms for identifying study variables from RCD and to examine the impact of alternative algorithms on treatment effect estimates.

Methods: We searched PubMed for observational studies published in 2018 that used RCD to explore drug treatment effects. We extracted information on the reporting, validation, and interpretation of algorithms, and summarized the reporting and methodological characteristics of the algorithms and their validation. We also assessed the divergence in effect estimates under alternative algorithms by calculating the ratio of the estimates from the primary versus alternative analyses.

Results: A total of 222 studies were included, of which 93 (41.9%) provided a complete list of algorithms for identifying participants, 36 (16.2%) for exposure, 132 (59.5%) for outcomes, and 15 (6.8%) for all study variables (population, exposure, and outcomes). Fifty-nine (26.6%) studies stated that their algorithms were validated, and 54 (24.3%) studies reported methodological characteristics of 66 validations, of which 61 validations in 49 studies came from cross-referenced validation studies. Of those 66 validations, 22 (33.3%) reported sensitivity and 16 (24.2%) reported specificity. A total of 63.6% of those reporting sensitivity and 56.3% of those reporting specificity used test-result-based sampling, an approach that can bias effect estimates. Twenty-eight (12.6%) studies used alternative algorithms to identify study variables, and 24 reported the effects estimated by both primary and sensitivity analyses. Of these, 20% showed differential effect estimates when alternative algorithms were used to identify the population, 18.2% for identifying exposure, and 45.5% for classifying outcomes. Only 32 (14.4%) studies discussed how the choice of algorithm might affect treatment effect estimates.

Conclusion: In observational studies of RCD, the algorithms used to identify study variables were not regularly validated, and, even where validation was performed, its methodological approach and reported performance were often poor. More seriously, alternative algorithms may yield differential treatment effect estimates, yet their impact is often ignored by researchers. Strong efforts, including recommendations, are warranted to promote good practice.
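As an illustrative sketch of the divergence metric described in the Methods (the abstract does not specify the effect measure; a relative measure such as a hazard ratio or odds ratio is assumed here), the ratio of estimates can be written as:

\[
\mathrm{RoE} = \frac{\hat{\theta}_{\text{alternative}}}{\hat{\theta}_{\text{primary}}}
\]

where \(\hat{\theta}\) denotes the estimated relative treatment effect from each analysis. An RoE close to 1 suggests that switching to the alternative algorithm leaves the estimate essentially unchanged, while values far from 1 flag algorithm-dependent results.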