Computer science
Encoder
Machine learning
Artificial intelligence
Transformer
Inductive bias
Feature learning
Graph
Task analysis
Health records
Training set
Task (project management)
Multi-task learning
Health care
Theoretical computer science
Operating system
Physics
Quantum mechanics
Economics
Voltage
Management
Economic growth
Authors
Junyuan Shang, Tengfei Ma, Cao Xiao, Jimeng Sun
Identifier
DOI:10.24963/ijcai.2019/825
Abstract
Medication recommendation is an important healthcare application. It is commonly formulated as a temporal prediction task. Hence, most existing works only utilize longitudinal electronic health records (EHRs) from a small number of patients with multiple visits, ignoring the large number of patients with only a single visit (selection bias). Moreover, important hierarchical knowledge such as the diagnosis hierarchy is not leveraged in the representation learning process. Despite the success of deep learning techniques in computational phenotyping, most previous approaches have two limitations: task-oriented representation and ignoring hierarchies of medical codes. To address these challenges, we propose G-BERT, a new model that combines the power of Graph Neural Networks (GNNs) and BERT (Bidirectional Encoder Representations from Transformers) for medical code representation and medication recommendation. We use GNNs to represent the internal hierarchical structures of medical codes. Then we integrate the GNN representation into a transformer-based visit encoder and pre-train it on EHR data from patients with only a single visit. The pre-trained visit encoder and representation are then fine-tuned for downstream predictive tasks on longitudinal EHRs from patients with multiple visits. G-BERT is the first to bring the language-model pre-training scheme into the healthcare domain, and it achieved state-of-the-art performance on the medication recommendation task.
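The abstract's pipeline (hierarchy-aware code embeddings feeding a visit encoder, pre-trained on single-visit patients) can be sketched as follows. This is a minimal illustration only: the actual G-BERT uses graph attention over the code ontology and a BERT-style transformer, whereas here each stage is replaced by a simple averaging stand-in, and the toy ontology and all names are assumptions made for the example.

```python
import numpy as np

# Toy ICD-style ontology: each code maps to its parent (None = chapter root).
# This hierarchy is an assumption for illustration, not the real ICD tree.
ONTOLOGY = {
    "250.0": "250",        # diabetes w/o complication -> diabetes
    "250": "249-259",
    "249-259": None,       # endocrine chapter (root)
    "401.9": "401",        # essential hypertension, unspecified
    "401": "401-405",
    "401-405": None,       # hypertensive disease chapter (root)
}

rng = np.random.default_rng(0)
DIM = 8
base = {code: rng.normal(size=DIM) for code in ONTOLOGY}  # initial embeddings

def ontology_embedding(code):
    """Enrich a leaf code's embedding with its ancestors' embeddings
    (a crude stand-in for GNN message passing over the code hierarchy)."""
    vecs, node = [], code
    while node is not None:
        vecs.append(base[node])
        node = ONTOLOGY[node]
    return np.mean(vecs, axis=0)

def visit_encoder(codes):
    """Pool the hierarchy-aware code embeddings into one visit vector
    (a stand-in for the transformer-based visit encoder)."""
    return np.mean([ontology_embedding(c) for c in codes], axis=0)

# A single-visit patient still yields a usable training signal for the
# encoder -- the point of pre-training on single-visit EHR data.
visit_vec = visit_encoder(["250.0", "401.9"])
print(visit_vec.shape)  # (8,)
```

Fine-tuning for medication recommendation would then attach a prediction head over sequences of such visit vectors from multi-visit patients; that stage is omitted here.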