Computer science
Artificial intelligence
Social media
Data science
Healthcare
Machine learning
Big data
Skewness
Robustness (evolution)
Psychological intervention
Audit
Data mining
Psychology
World Wide Web
Political science
Telecommunications
Biochemistry
Chemistry
Psychiatry
Gene
Law
Management
Economics
Authors
Lidia Flores,SeungJun Kim,Sean D. Young
Identifier
DOI:10.1136/jme-2022-108875
Abstract
Components of artificial intelligence (AI) for analysing social big data, such as natural language processing (NLP) algorithms, have improved the timeliness and robustness of health data. NLP techniques have been implemented to analyse large volumes of text from social media platforms to gain insights into disease symptoms, understand barriers to care and predict disease outbreaks. However, AI-based decisions may contain biases that could misrepresent populations, skew results or lead to errors. Bias, within the scope of this paper, is described as the difference between an algorithm's predicted values and the true values it is modelling. Bias within algorithms may lead to inaccurate healthcare outcomes and exacerbate health disparities when results derived from these biased algorithms are applied to health interventions. Researchers who implement these algorithms must consider when and how bias may arise. This paper explores algorithmic biases arising from the data collection, labelling and modelling of NLP algorithms. Researchers have a role in ensuring that efforts to combat bias are carried through, especially when drawing health conclusions from social media posts that are linguistically diverse. Through open collaboration, auditing processes and the development of guidelines, researchers may be able to reduce bias and improve the NLP algorithms used for health surveillance.
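The abstract defines bias as the gap between an algorithm's predicted values and the true values, and recommends auditing as one remedy. A minimal sketch of what such a subgroup audit could look like in Python follows; it is illustrative only, not the authors' method, and every name and data point in it is hypothetical:

```python
# Hypothetical subgroup bias audit: bias is taken here as the gap between
# the model's predicted prevalence and the observed prevalence within each
# (e.g. linguistic or demographic) subgroup. All data below is made up.
from collections import defaultdict

def subgroup_bias(records):
    """records: iterable of (group, y_true, y_pred) with labels in {0, 1}.
    Returns {group: mean(y_pred) - mean(y_true)} for each subgroup."""
    sums = defaultdict(lambda: [0, 0, 0])  # [count, sum_true, sum_pred]
    for group, y_true, y_pred in records:
        s = sums[group]
        s[0] += 1
        s[1] += y_true
        s[2] += y_pred
    return {g: (s[2] - s[1]) / s[0] for g, s in sums.items()}

if __name__ == "__main__":
    # Toy audit set: (subgroup, true label, model prediction).
    data = [
        ("dialect_a", 1, 1), ("dialect_a", 0, 0), ("dialect_a", 1, 1),
        ("dialect_b", 1, 0), ("dialect_b", 1, 0), ("dialect_b", 0, 0),
    ]
    for group, bias in subgroup_bias(data).items():
        # A negative value means the model under-predicts for that group,
        # the kind of disparity the paper argues audits should surface.
        print(f"{group}: bias = {bias:+.2f}")
```

Run on the toy data above, the sketch reports zero bias for `dialect_a` and a bias of -0.67 for `dialect_b`, illustrating how a classifier can look accurate in aggregate while systematically missing cases in one linguistic subgroup.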