Computer science
Artificial intelligence
Feature (linguistics)
Gait
Kinematics
Pattern recognition (psychology)
Classifier (UML)
Computer vision
Data mining
Machine learning
Physiology
Philosophy
Linguistics
Physics
Classical mechanics
Biology
Authors
M. Amsaprabhaa,Y. Nancy Jane,H. Khanna Nehemiah
Identifier
DOI: 10.1016/j.eswa.2022.118681
Abstract
A fall occurs when a person's movement coordination is disturbed, forcing them to come to rest on the ground unintentionally and posing serious health risks. The objective of this work is to develop a Multimodal SpatioTemporal Skeletal Kinematic Gait Feature Fusion (MSTSK-GFF) classifier for detecting falls in video data. The walking pattern of an individual is referred to as gait. A fall recorded on video shows discrepancies and irregularities in gait patterns, and analysis of these patterns plays a vital role in identifying fall risk. However, assessing gait patterns from video data remains challenging due to their spatial and temporal feature dependencies. The proposed MSTSK-GFF framework presents a multimodal feature fusion process that overcomes these challenges and generates two sets of spatiotemporal kinematic gait features using a SpatioTemporal Graph Convolution Network (STGCN) and a 1D-CNN network model. The two generated feature sets are combined through a concatenative feature fusion process, and a classification model is constructed for detecting falls. To optimize the network weights, a bio-inspired spotted hyena optimizer is applied during training. Finally, the performance of the classification model is evaluated and compared for detecting falls in videos. The proposed work is evaluated on two vision-based fall datasets, namely the UR Fall Detection (URFD) dataset and a self-built dataset. The experimental outcomes demonstrate the effectiveness of MSTSK-GFF, with classification accuracies of 96.53% and 95.80% on the two datasets, when compared with existing state-of-the-art techniques.
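The abstract describes a two-branch fusion design: a graph-convolution branch over skeleton joints and a 1D-CNN branch over kinematic gait descriptors, whose pooled features are concatenated before a fall / no-fall classifier. The following is a minimal PyTorch sketch of that concatenative fusion idea only, not the authors' implementation: all module names, layer choices, feature dimensions, and input shapes are assumptions, a plain 2D convolution stands in for the STGCN graph convolution, and the spotted hyena weight optimization is not shown.

# Minimal sketch (assumptions throughout): concatenative fusion of two
# spatiotemporal gait feature streams, loosely mirroring the MSTSK-GFF idea.
import torch
import torch.nn as nn


class SpatioTemporalBranch(nn.Module):
    """Stand-in for an STGCN-like branch over skeleton joints.

    Input:  (batch, channels, frames, joints), e.g. 3-D joint coordinates.
    Output: (batch, feat_dim) pooled spatiotemporal feature vector.
    """

    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        # A plain Conv2d over (frames, joints) replaces the graph convolution.
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=(9, 1), padding=(4, 0)),
            nn.ReLU(),
            nn.Conv2d(64, feat_dim, kernel_size=(9, 1), padding=(4, 0)),
            nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        return self.pool(self.conv(x)).flatten(1)


class Kinematic1DBranch(nn.Module):
    """1D-CNN branch over per-frame kinematic gait descriptors.

    Input:  (batch, kin_features, frames); Output: (batch, feat_dim).
    """

    def __init__(self, in_features=16, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)

    def forward(self, x):
        return self.pool(self.conv(x)).flatten(1)


class FusionFallClassifier(nn.Module):
    """Concatenates both feature sets and predicts fall / no-fall."""

    def __init__(self):
        super().__init__()
        self.branch_a = SpatioTemporalBranch()
        self.branch_b = Kinematic1DBranch()
        self.head = nn.Linear(128 + 64, 2)  # two classes: fall, no fall

    def forward(self, skeleton_seq, kinematic_seq):
        fused = torch.cat([self.branch_a(skeleton_seq),
                           self.branch_b(kinematic_seq)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = FusionFallClassifier()
    skel = torch.randn(4, 3, 120, 25)  # 120-frame, 25-joint skeleton batch
    kin = torch.randn(4, 16, 120)      # per-frame kinematic feature batch
    print(model(skel, kin).shape)      # torch.Size([4, 2])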