Disengagement theory
Moral responsibility
Moral disengagement
Accountability
Public relations
Social responsibility
Health care
Psychology
Internet privacy
Social psychology
Political science
Computer science
Medicine
Law
Gerontology
Authors
Ariadne A. Nichol, Meghan C. Halley, Carole A. Federico, Mildred K. Cho, Pamela Sankar
Source
Journal: Biocomputing
Date: 2022-11-01
Pages: 496-506
Citations: 5
Identifier
DOI:10.1142/9789811270611_0045
Abstract
Machine learning predictive analytics (MLPA) are utilized increasingly in health care, but can pose harms to patients, clinicians, health systems, and the public. The dynamic nature of this technology creates unique challenges to evaluating safety and efficacy and minimizing harms. In response, regulators have proposed an approach that would shift more responsibility to MLPA developers for mitigating potential harms. To be effective, this approach requires MLPA developers to recognize, accept, and act on responsibility for mitigating harms. In interviews of 40 MLPA developers of health care applications in the United States, we found that a subset of ML developers made statements reflecting moral disengagement, representing several different potential rationales that could create distance between personal accountability and harms. However, we also found a different subset of ML developers who expressed recognition of their role in creating potential hazards, the moral weight of their design decisions, and a sense of responsibility for mitigating harms. We also found evidence of moral conflict and uncertainty about responsibility for averting harms as an individual developer working in a company. These findings suggest possible facilitators and barriers to the development of ethical ML that could act through encouragement of moral engagement or discouragement of moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms might have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.