Vision Sensor for Automatic Recognition of Human Activities via Hybrid Features and Multi-Class Support Vector Machine

Keywords: benchmark (surveying), computer science, support vector machine, artificial intelligence, machine learning, activity recognition, generalization, feature (linguistics), feature extraction, task (project management), identification (biology), class (philosophy), precision and recall, gesture, pattern recognition (psychology), data mining, engineering, mathematical analysis, linguistics, philosophy, botany, mathematics, geodesy, systems engineering, biology, geography
Authors
Saleha Kamal, Haifa F. Alhasson, Mohammed Alnusayri, Mohammed Alatiyyah, Hanan Aljuaid, Ahmad Jalal, Hui Liu
Source
Journal: Sensors [Multidisciplinary Digital Publishing Institute]
Volume/Issue: 25 (1): 200. Citations: 1
Identifier
DOI: 10.3390/s25010200
Abstract

Over recent years, automated Human Activity Recognition (HAR) has been an area of interest for many researchers due to its widespread application in surveillance systems, healthcare environments, and many other domains. This has led researchers to develop coherent and robust systems that perform HAR efficiently. Although many efficient systems have been developed to date, several issues remain to be addressed. Several factors contribute to the complexity of the task and make detecting human activities more challenging: (i) poor lighting conditions; (ii) different viewing angles; (iii) intricate clothing styles; (iv) diverse activities with similar gestures; and (v) limited availability of large datasets. However, through effective feature extraction, we can develop resilient systems with higher accuracy. During feature extraction, we aim to extract unique key body points and full-body features that exhibit distinct attributes for each activity. Our proposed system introduces an innovative approach for identifying human activity in outdoor and indoor settings by extracting effective spatio-temporal features, combined with a Multi-Class Support Vector Machine, which enhances the model’s ability to accurately identify the activity classes. The experimental findings show that our model outperforms others in terms of classification accuracy and generalization, indicating its efficient analysis on benchmark datasets. Various performance metrics, including mean recognition accuracy, precision, F1 score, and recall, assess the effectiveness of our model. The assessment findings show remarkable recognition rates of around 88.61%, 87.33%, 86.5%, and 81.25% on the BIT-Interaction, UT-Interaction, NTU RGB + D 120, and PKUMMD datasets, respectively.
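The abstract evaluates the model with mean recognition accuracy, precision, recall, and F1 score over multiple activity classes. As an illustration only (not the authors' code), the following sketch shows how these multi-class metrics are commonly computed with macro averaging, i.e., per-class precision and recall averaged uniformly across classes:

```python
def multiclass_metrics(y_true, y_pred, labels):
    """Return (accuracy, macro precision, macro recall, F1) for multi-class predictions."""
    precisions, recalls = [], []
    for c in labels:
        # Count true positives, false positives, and false negatives for class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    # Macro averaging: each class contributes equally, regardless of its frequency.
    precision = sum(precisions) / len(labels)
    recall = sum(recalls) / len(labels)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return accuracy, precision, recall, f1

# Hypothetical activity labels, purely for illustration.
truth = ["walk", "walk", "punch", "punch", "hug", "hug"]
preds = ["walk", "punch", "punch", "punch", "hug", "walk"]
acc, prec, rec, f1 = multiclass_metrics(truth, preds, ["walk", "punch", "hug"])
```

Papers sometimes report micro- rather than macro-averaged scores, which weight classes by support; the abstract does not specify which averaging the authors used, so this choice is an assumption.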