Partially Observable Markov Decision Process
Authors
Raissa Zurli Bittencourt Bravo, Adriana Leiras, Fernando Luiz Cyrino Oliveira
Abstract
Researchers have proposed the use of unmanned aerial vehicles (UAVs) in humanitarian relief to search for victims in disaster‐affected areas. Since UAVs must search through the entire affected area to find victims, the path‐planning operation becomes equivalent to an area coverage problem. In this study, we propose an innovative method for solving such a problem based on a Partially Observable Markov Decision Process (POMDP), which considers the observations made from UAVs. The formulation of the UAV path planning is based on the idea of assigning higher priorities to the areas that are more likely to contain victims. We applied the method to three illustrative cases, considering different types of disasters: a tornado in Brazil, a refugee camp in South Sudan, and a nuclear accident in Fukushima, Japan. The results demonstrated that the POMDP solution achieves full coverage of disaster‐affected areas within a reasonable time span. Through a detailed multivariate sensitivity analysis, we evaluate the traveled distance and the operation duration (both of which were quite stable), as well as the time required to find groups of victims. Comparisons with a Greedy Algorithm showed that the POMDP finds victims more quickly, which is the priority in humanitarian relief, whereas the Greedy Algorithm focuses on minimizing the traveled distance. We also discuss the ethical, legal, and social acceptance issues that can influence the application of the proposed methodology in practice.
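The abstract contrasts a priority-driven POMDP planner with a distance-minimizing greedy baseline. The sketch below is an illustration only, not the authors' POMDP formulation: it uses a hypothetical grid with made-up victim priors, and a single weight that trades off prior probability against travel distance. Setting the weight to zero recovers a nearest-neighbour greedy tour, while a larger weight steers the search toward high-probability cells first, mirroring the trade-off the abstract describes.

```python
import math

# Hypothetical grid cells with assumed prior probabilities of containing
# victims; none of these coordinates or numbers come from the paper.
cells = {(0, 0): 0.1, (0, 3): 0.7, (2, 1): 0.2, (3, 3): 0.9, (1, 2): 0.05}

def dist(a, b):
    """Euclidean distance between two grid cells."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan(start, cells, weight):
    """Visit every cell; at each step pick the cell maximizing
    weight * prior - travel distance.

    weight == 0 reduces to a nearest-neighbour greedy tour (distance only);
    a large weight visits likely victim locations earlier at the cost of
    extra travel.
    """
    pos, remaining, order = start, dict(cells), []
    while remaining:
        nxt = max(remaining, key=lambda c: weight * remaining[c] - dist(pos, c))
        order.append(nxt)
        del remaining[nxt]
        pos = nxt
    return order

greedy_tour = plan((0, 0), cells, weight=0.0)    # minimizes each hop's distance
priority_tour = plan((0, 0), cells, weight=5.0)  # favours high-prior cells
```

On this toy instance the priority-weighted tour reaches the highest-prior cell (3, 3) earlier than the greedy tour does, at the cost of a longer path, which is the qualitative behaviour the abstract reports for POMDP versus Greedy.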