Keywords
Pose, Artificial intelligence, Computer science, Computer vision, Frame (networking), Scalability, RGB color model, Optics (focus), Database, Telecommunications, Optics, Physics
Authors
Dominic Roberts, Wilfredo Torres Calderon, Shuai Tang, Mani Golparvar-Fard
Source
Journal: Journal of Computing in Civil Engineering
[American Society of Civil Engineers]
Date: 2020-04-18
Volume/Issue: 34 (4)
Citations: 74
Identifier
DOI: 10.1061/(asce)cp.1943-5487.0000898
Abstract
Activity analysis of construction resources is generally performed by manually observing construction operations, either in person or through recorded videos. It is thus prone to observer fatigue and bias and is of limited scalability and cost-effectiveness. Automating this procedure obviates these issues and can allow project teams to focus on performance improvement. This paper introduces a novel deep learning– and vision-based activity analysis framework that estimates and tracks two-dimensional (2D) worker pose and outputs per-frame worker activity labels given input red-green-blue (RGB) video footage of a construction worker operation. We used 317 annotated videos of bricklaying and plastering operations to train and validate the proposed method. This method obtained 82.6% mean average precision (mAP) for pose estimation, and 72.6% multiple-object tracking accuracy (MOTA) and 81.3% multiple-object tracking precision (MOTP) for pose tracking. Cross-validation activity analysis accuracy of 78.5% was also obtained. We show that worker pose contributes to activity analysis results. This highlights the potential for using vision-based ergonomics assessment methods that rely on pose in conjunction with the proposed method for assessing the ergonomic viability of individual activities.
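The MOTA and MOTP figures reported above are the standard CLEAR MOT tracking metrics. As a rough illustration (not the authors' code), the sketch below computes them from per-frame counts: MOTA penalizes misses, false positives, and identity switches relative to the number of ground-truth objects, while MOTP averages a localization score (here assumed to be bounding-box IoU, where higher is better) over matched pairs. The frame counts are entirely hypothetical.

```python
def mota(misses, false_positives, id_switches, num_gt):
    """CLEAR MOT accuracy: 1 - (FN + FP + IDSW) / total ground-truth
    objects, with all counts summed over every frame."""
    return 1.0 - (misses + false_positives + id_switches) / num_gt

def motp(total_overlap, num_matches):
    """CLEAR MOT precision: mean localization score (assumed IoU here)
    over all matched ground-truth/hypothesis pairs."""
    return total_overlap / num_matches

# Hypothetical per-frame tallies for illustration only:
# (misses, false positives, id switches, GT objects, IoU sum, matched pairs)
frames = [
    (1, 0, 0, 4, 2.7, 3),
    (0, 1, 1, 4, 3.4, 4),
    (2, 0, 0, 4, 1.8, 2),
]
fn = sum(f[0] for f in frames)       # 3 missed detections
fp = sum(f[1] for f in frames)       # 1 false positive
idsw = sum(f[2] for f in frames)     # 1 identity switch
gt = sum(f[3] for f in frames)       # 12 ground-truth objects
iou_sum = sum(f[4] for f in frames)  # 7.9 total IoU over matches
matches = sum(f[5] for f in frames)  # 9 matched pairs

print(f"MOTA = {mota(fn, fp, idsw, gt):.3f}")   # 1 - 5/12
print(f"MOTP = {motp(iou_sum, matches):.3f}")   # 7.9/9
```

Note that pose-tracking benchmarks often swap bounding-box IoU for a keypoint-similarity score in MOTP, but the accumulation over frames is the same.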