Sitting
Artificial intelligence
Computer science
Machine learning
Deep learning
Multimodal learning
Task (project management)
Modality (human–computer interaction)
Engineering
Medicine
Systems engineering
Pathology
Authors
Xiangying Zhang,Junming Fan,Tao Peng,Pai Zheng,Xujun Zhang,Renzhong Tang
Identifier
DOI:10.1016/j.sna.2022.114150
Abstract
Recognizing sitting posture is important for preventing the development of work-related musculoskeletal disorders in office workers. Multimodal data, i.e., infrared maps and pressure maps, have been leveraged to achieve accurate recognition while preserving privacy and remaining unobtrusive in daily use. Existing studies in sitting posture recognition rely on handcrafted features combined with machine learning models for multimodal data fusion, which depends heavily on domain knowledge. Therefore, a deep learning model is proposed to fuse the multimodal data and recognize the sitting posture. This model contains modality-specific backbones, a cross-modal self-attention module, and multi-task learning-based classification. Experiments are conducted to verify the effectiveness of the proposed model using 20 participants' data, achieving a 93.08% F1-score. The high-performance result indicates that the proposed model is promising for sitting posture-related applications.
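To illustrate the kind of architecture the abstract describes, below is a minimal PyTorch sketch of two modality-specific backbones whose feature tokens are fused by a cross-modal self-attention layer and fed to multi-task heads. It is an assumption-laden illustration, not the authors' implementation: the backbone depths, embedding size, input resolutions, number of posture classes, and the auxiliary task are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

class CrossModalSitPostureNet(nn.Module):
    """Hypothetical sketch: modality-specific backbones + cross-modal
    self-attention fusion + multi-task classification heads."""

    def __init__(self, num_postures=7, embed_dim=64, num_heads=4):
        super().__init__()
        # Modality-specific backbones (small CNNs; the paper's actual backbones are not given here)
        self.ir_backbone = self._make_backbone(in_ch=1, out_ch=embed_dim)
        self.pressure_backbone = self._make_backbone(in_ch=1, out_ch=embed_dim)
        # Self-attention over the concatenated token sequences of both modalities
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Multi-task heads: main posture class plus an auxiliary task (purely illustrative)
        self.posture_head = nn.Linear(embed_dim, num_postures)
        self.aux_head = nn.Linear(embed_dim, 2)  # e.g., a binary auxiliary label; assumption

    @staticmethod
    def _make_backbone(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, ir_map, pressure_map):
        # Extract per-modality feature maps and flatten spatial positions into tokens
        ir_tokens = self.ir_backbone(ir_map).flatten(2).transpose(1, 2)              # (B, N_ir, D)
        pr_tokens = self.pressure_backbone(pressure_map).flatten(2).transpose(1, 2)  # (B, N_pr, D)
        tokens = torch.cat([ir_tokens, pr_tokens], dim=1)                            # (B, N_ir + N_pr, D)
        # Self-attention across the joint token set lets each modality attend to the other
        fused, _ = self.attn(tokens, tokens, tokens)
        pooled = fused.mean(dim=1)                                                    # global average over tokens
        return self.posture_head(pooled), self.aux_head(pooled)

# Usage with dummy inputs (map resolutions are assumptions)
model = CrossModalSitPostureNet()
ir = torch.randn(2, 1, 32, 32)
pressure = torch.randn(2, 1, 32, 32)
posture_logits, aux_logits = model(ir, pressure)
```

In a multi-task setup of this kind, the total loss would typically be a weighted sum of the per-head cross-entropy losses, so the auxiliary objective regularizes the shared fused representation.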