Healthcare
Artificial Intelligence
Computer Science
Critical Care Medicine
Medicine
Economics
Economic Growth
Authors
Richard J. Chen, Judy J. Wang, Drew F. K. Williamson, Tiffany Chen, Jana Lipková, Ming Y. Lu, Sharifa Sahai, Faisal Mahmood
Identifier
DOI: 10.1038/s41551-023-01056-8
Abstract
In healthcare, the development and deployment of insufficiently fair systems of artificial intelligence (AI) can undermine the delivery of equitable care. Assessments of AI models stratified across subpopulations have revealed inequalities in how patients are diagnosed, treated and billed. In this Perspective, we outline fairness in machine learning through the lens of healthcare, and discuss how algorithmic biases (in data acquisition, genetic variation and intra-observer labelling variability, in particular) arise in clinical workflows and the resulting healthcare disparities. We also review emerging technology for mitigating biases via disentanglement, federated learning and model explainability, and their role in the development of AI-based software as a medical device.