Benchmark (surveying)
Ranking (information retrieval)
Computer science
Percentile
Machine learning
Data mining
Class (philosophy)
Process (computing)
Artificial intelligence
Statistics
Mathematics
Geodesy
Operating systems
Geography
Authors
Raphael Trevizani, Yan Zhen, Jason Greenbaum, Alessandro Sette, Morten Nielsen, Bjoern Peters
Abstract
In 2014, the Immune Epitope Database automated benchmark was created to compare the performance of MHC class I binding predictors. However, this is not a straightforward process due to the different and non-standardized outputs of the methods. Additionally, some methods are more restrictive regarding the HLA alleles and epitope sizes for which they predict binding affinities, while others are more comprehensive. To address how these problems impacted the ranking of the predictors, we developed an approach to assess the reliability of different metrics. We found that using percentile-ranked results improved the stability of the ranks and allowed the predictors to be reliably ranked despite not being evaluated on the same data. We also found that, given the rate at which new data are incorporated into the benchmark, a new method must wait at least 4 years to be ranked against the pre-existing methods. The best-performing tools with statistically indistinguishable scores in this benchmark were NetMHCcons, NetMHCpan4.0, ANN3.4, NetMHCpan3.0 and NetMHCpan2.8. The results of this study will be used to improve the evaluation and display of benchmark performance. We highly encourage anyone working on MHC binding predictions to participate in this benchmark to get an unbiased evaluation of their predictors.
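The abstract's central methodological point is that converting each predictor's raw output to percentile ranks against a reference score distribution makes methods with different, non-standardized output scales comparable. The sketch below is a minimal illustration of that idea only, not the benchmark's actual code; the function name, the synthetic reference distributions, and the use of NumPy are assumptions for illustration.

```python
import numpy as np

def percentile_ranks(raw_scores, reference_scores, higher_is_stronger=True):
    """Map raw predictor outputs onto percentile ranks against a reference
    score distribution (e.g. scores for a large set of random peptides of
    the same length for the same HLA allele). A low percentile rank means a
    strong predicted binder, regardless of the predictor's native scale."""
    raw = np.asarray(raw_scores, dtype=float)
    ref = np.asarray(reference_scores, dtype=float)
    if not higher_is_stronger:           # e.g. predictors reporting IC50 in nM
        raw, ref = -raw, -ref
    # For each peptide: percentage of reference peptides scoring at least as well.
    return np.array([100.0 * np.mean(ref >= s) for s in raw])

if __name__ == "__main__":
    # Hypothetical backgrounds: two predictors with incompatible native scales
    # become directly comparable once both are expressed as percentile ranks.
    rng = np.random.default_rng(0)
    ref_a = rng.normal(size=10_000)               # assumed score background, method A
    ref_b = rng.uniform(0, 50_000, size=10_000)   # assumed IC50 background, method B
    print(percentile_ranks([2.5, 0.1], ref_a))
    print(percentile_ranks([30.0, 20_000.0], ref_b, higher_is_stronger=False))
```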