Authors
Dinh-Tan Pham, Van-Nam Hoang, Viet-Duc Le, Tien-Thanh Nguyen, Thanh-Hai Tran, Hai Vu, Van-Hung Le, Thi-Lan Le
Identifier
DOI:10.1109/icce55644.2022.9852103
Abstract
Human action recognition (HAR) is an important task for UAVs, enabling instant decision-making from captured videos. HAR for UAVs is challenging due to the UAV's motion, attitude, and view changes during flight. Moreover, UAV video sequences may suffer from blur and low resolution. All of these issues make HAR for UAVs difficult, necessitating HAR methods that account for the characteristics of UAV data. In this paper, we revisit several state-of-the-art deep learning methods and evaluate their performance on the UAV-Human dataset, the largest public UAV dataset to date. Based on this evaluation, we propose a new framework that combines AAGCN and MS-G3D through a Feature Fusion module, with data pre-processing applied in all streams. Experimental results show that our proposed method outperforms state-of-the-art methods on the UAV-Human dataset.
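The combination of AAGCN and MS-G3D described above can be illustrated with a minimal late-fusion sketch: each backbone produces per-class scores for a clip, and the two score vectors are merged into a single prediction. The weighted softmax averaging shown here is an illustrative assumption, not the paper's actual Feature Fusion module; the class logits are toy values.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_scores(scores_a, scores_b, w=0.5):
    """Weighted average of class probabilities from two streams.

    scores_a, scores_b: per-class logits from the two backbones
    (e.g. AAGCN and MS-G3D); w is a hypothetical fusion weight.
    """
    return w * softmax(scores_a) + (1.0 - w) * softmax(scores_b)

# Toy 3-class logits for one clip (hypothetical values).
aagcn_logits = np.array([2.0, 0.5, -1.0])
msg3d_logits = np.array([1.5, 1.0, -0.5])

fused = fuse_scores(aagcn_logits, msg3d_logits)
pred = int(np.argmax(fused))  # index of the fused top-scoring action class
```

Late fusion of this kind is a common baseline for multi-stream skeleton models; the paper's module may instead fuse intermediate features rather than final scores.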