Computer Science
Healthcare
Internet Privacy
Computer Security
Information Privacy
Economics
Economic Growth
Authors
Xinrong Gong, Jie Gao, Song Min Sun, Zhijie Zhong, Yifan Shi, Huanqiang Zeng, Kaixiang Yang
Identifier
DOI: 10.1109/jbhi.2025.3558935
Abstract
The emergence of large language models (LLMs) has been a key enabler of technological innovation in healthcare: users can conveniently obtain more accurate medical consultation services by leveraging LLMs' powerful knowledge-inference capability. However, existing LLMs require users to upload explicit requests during remote healthcare consultations, which risks exposing personal privacy. Furthermore, the reliability of the responses generated by LLMs is not guaranteed. To tackle these challenges, this paper proposes a novel privacy-preserving LLM for user-initiated healthcare, called the Adaptive Compression-based Privacy-preserving LLM (ACP2LLM). Specifically, an adaptive token compression method based on information entropy is designed so that ACP2LLM protects user-sensitive information when invoking medical consultations from LLMs deployed on a cloud platform. Moreover, a multi-doctor, one-chief-physician mechanism is proposed to rationally split patients' requests and infer them collaboratively, achieving a privacy-utility trade-off. Notably, the proposed ACP2LLM remains highly competitive across various token compression rates. Extensive experiments on multiple medical question-answering datasets demonstrate that ACP2LLM provides strong privacy protection and high answer precision, outperforming current state-of-the-art LLM methods.
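The abstract names two components: entropy-guided token compression and a multi-doctor, one-chief-physician request-splitting scheme. As a rough illustration of the first idea only, the minimal Python sketch below scores prompt tokens by their self-information under a smoothed background unigram model and keeps the highest-information tokens at a target ratio before a request would leave the device. The function names (`self_information`, `compress_prompt`), the unigram scoring model, and the `keep_ratio` parameter are illustrative assumptions, not the paper's actual ACP2LLM method, which uses its own entropy measure, adaptive compression rates, and sensitivity handling.

```python
# Illustrative sketch only: entropy-guided token compression of a prompt
# before sending it to a cloud-hosted LLM. This is NOT the ACP2LLM
# implementation; the unigram background model is a stand-in assumption.
import math
from collections import Counter

def self_information(tokens, background_counts, total):
    """Score each token by -log2 p(token) under a smoothed background unigram model."""
    vocab = len(background_counts)
    return [-math.log2((background_counts.get(t, 0) + 1) / (total + vocab + 1))
            for t in tokens]

def compress_prompt(tokens, background_counts, total, keep_ratio=0.6):
    """Keep only the highest-information tokens, preserving their original order."""
    scores = self_information(tokens, background_counts, total)
    k = max(1, int(len(tokens) * keep_ratio))
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    return [tokens[i] for i in sorted(top)]

if __name__ == "__main__":
    # Hypothetical background corpus and patient request, for illustration only.
    corpus = "the patient reports mild pain and the doctor recommends rest".split()
    counts, total = Counter(corpus), len(corpus)
    request = "the patient reports severe chest pain and shortness of breath".split()
    print(compress_prompt(request, counts, total, keep_ratio=0.5))
```

In the paper's design, compressed requests are additionally split across multiple "doctor" LLMs and aggregated by a "chief physician" to balance privacy and utility; that collaborative-inference step is not reproduced in this sketch.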