Computer Science
Reinforcement Learning
Compression (Physics)
Blood Flow
Artificial Intelligence
Medicine
Materials Science
Cardiology
Composite Materials
Authors
Iara Santelices, Cederick Landry, Arash Arami, Sean D. Peterson
Source
Journal: IEEE Journal of Biomedical and Health Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 1-9
Identifier
DOI: 10.1109/jbhi.2024.3423698
Abstract
Intermittent pneumatic compression (IPC) systems apply external pressure to the lower limbs and enhance peripheral blood flow. We previously introduced a cardiac-gated compression system that enhanced arterial blood velocity (BV) in the lower limb compared to fixed compression timing (CT) for seated and standing subjects. However, these pilot studies found that the CT that maximized BV was not constant across individuals and could change over time. Current CT modelling methods for IPC are limited to predictions for a single day and one heartbeat ahead. However, IPC therapy may span weeks or longer, the BV response to compression can vary with physiological state, and the best CT for eliciting the desired physiological outcome may change, even for the same individual. We propose that a deep reinforcement learning (DRL) algorithm can learn and adaptively modify CT to achieve a selected outcome using IPC. Herein, we target maximizing lower limb arterial BV as the desired outcome and build participant-specific simulated lower limb environments for 6 participants. We show that DRL can adaptively learn the CT for IPC that maximizes arterial BV. Compared to previous work, the DRL agent achieves 98 ± 2% of the resultant blood flow and is faster at maximizing BV; the DRL agent can learn an "optimal" policy in 15 ± 2 minutes on average and can adapt on the fly. Given a desired objective, we posit that the proposed DRL agent can be implemented in IPC systems to rapidly learn the (potentially time-varying) "optimal" CT with a human-in-the-loop.
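The abstract describes a closed-loop agent that adjusts compression timing (CT) to maximize measured arterial blood velocity (BV), and that can keep adapting as the best CT drifts over time. The sketch below is an illustration only: it replaces the paper's DRL agent and participant-specific simulated environments with a simple epsilon-greedy agent over discretized CTs and a toy simulated limb. The SimulatedLimb response shape, the drifting optimum, and all parameter values are assumptions made for this sketch and are not taken from the paper.

```python
# Hypothetical sketch: closed-loop adaptation of compression timing (CT) to
# maximize arterial blood velocity (BV). The paper uses a deep RL agent with
# participant-specific simulated limb environments; here a simple epsilon-greedy
# agent over discretized CTs stands in to illustrate the adaptive loop only.
import numpy as np

rng = np.random.default_rng(0)

class SimulatedLimb:
    """Toy stand-in for a participant-specific BV response to compression timing.

    The BV gain peaks at a timing (fraction of the cardiac cycle) that slowly
    drifts, mimicking the observation that the best CT can change over time.
    """
    def __init__(self, optimal_ct=0.35, drift=1e-4, noise=0.05):
        self.optimal_ct = optimal_ct
        self.drift = drift
        self.noise = noise

    def step(self, ct):
        # BV gain modeled as a noisy bump around the current optimal timing.
        bv = np.exp(-((ct - self.optimal_ct) ** 2) / 0.02) + rng.normal(0, self.noise)
        self.optimal_ct += self.drift  # slow physiological drift
        return bv

class TimingAgent:
    """Epsilon-greedy agent over discretized CTs with a constant step size,
    so it can track a non-stationary optimum (adapt "on the fly")."""
    def __init__(self, n_actions=20, epsilon=0.1, alpha=0.1):
        self.cts = np.linspace(0.0, 0.95, n_actions)  # fraction of cardiac cycle
        self.q = np.zeros(n_actions)
        self.epsilon, self.alpha = epsilon, alpha

    def act(self):
        if rng.random() < self.epsilon:
            return int(rng.integers(len(self.cts)))  # explore a random CT
        return int(np.argmax(self.q))                # exploit the best CT so far

    def update(self, action, reward):
        # Constant-step-size value update tracks a drifting optimum.
        self.q[action] += self.alpha * (reward - self.q[action])

env, agent = SimulatedLimb(), TimingAgent()
for beat in range(3000):          # one compression decision per heartbeat
    a = agent.act()
    r = env.step(agent.cts[a])
    agent.update(a, r)

print(f"learned CT ~ {agent.cts[int(np.argmax(agent.q))]:.2f} of cardiac cycle")
```

The constant step size is the design choice that lets this toy agent keep tracking a slowly drifting optimum, mirroring the abstract's point that the CT maximizing BV can change over time even for the same individual.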