Authors
Ning Li, Xingjiang Chen, Yanghe Feng, Jincai Huan
Abstract
Human–computer interaction cognitive behavior (HCICB) modeling faces four deficiencies: 1) the lack of a standard framework model; 2) large simulation error; 3) a single simulation dimension; and 4) the lack of dedicated simulation software. To address these deficiencies, we carry out work in four aspects. First, we construct an HCICB model with the user, system device, and environment as its core elements, providing a unified framework for subsequent HCICB modeling in the Military Internet of Things (MIoT) command and control (C2) system. Second, we correct the visual and motion parameters in the adaptive control of thought–rational (ACT-R) module of the Cogtool model through a commander-in-the-loop (CIL) experiment. Third, we construct a mental workload (MW) prediction model of the MIoT C2 system based on an improved visual, auditory, cognitive, and psychomotor (VACP) method, which enables fast, high-precision, quantitative MW prediction and adds a new simulation dimension to the HCICB model. Fourth, we develop MwCogtool, HCICB prediction software that can rapidly simulate typical tasks at the design and usage stages of the MIoT C2 system and can quickly and visually output six parameters over the whole process: task completion time (TCT), MW, eye movement preparation time, eye movement execution time, motion time, and cognitive time. In addition, we select 20 real users and 9 typical tasks of the MIoT C2 system to carry out a CIL verification experiment. Compared with Cogtool, MwCogtool reduces the maximum simulation error in the TCT of the C2 system from 45.00% to 5.58%, and the consistency of the simulation results with real user data reaches 0.99. The MW prediction model significantly and negatively predicts changes in real users' eye movement and accurately captures the trend of MW change. We also build a fitting model between the mean predicted MW value and the eye movement parameters.
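The improved VACP model itself is specified in the paper body rather than in this abstract. For orientation only, the sketch below illustrates a generic, classical VACP-style aggregation of per-channel demand ratings into a time-weighted mean workload; the channel names, rating values, task segments, and averaging scheme are placeholder assumptions and do not reproduce the authors' improved model or MwCogtool.

# Minimal, illustrative VACP-style mental-workload (MW) aggregation.
# NOT the paper's improved VACP model: segment names, ratings, and the
# time-weighted averaging below are assumptions made purely to show the
# general idea of channel-based workload prediction over a task timeline.

from dataclasses import dataclass


@dataclass
class TaskSegment:
    """One simulated operator activity with per-channel demand ratings."""
    name: str
    duration_s: float    # segment duration in seconds
    visual: float        # channel demand ratings (classic VACP uses 0-7 scales)
    auditory: float
    cognitive: float
    psychomotor: float

    @property
    def channel_sum(self) -> float:
        # Classic VACP: instantaneous workload = sum of the four channel demands.
        return self.visual + self.auditory + self.cognitive + self.psychomotor


def mean_workload(segments: list[TaskSegment]) -> float:
    """Time-weighted mean workload over a task timeline."""
    total_time = sum(s.duration_s for s in segments)
    if total_time == 0:
        return 0.0
    return sum(s.channel_sum * s.duration_s for s in segments) / total_time


if __name__ == "__main__":
    # Hypothetical C2-style task timeline (all values are placeholders).
    timeline = [
        TaskSegment("locate target icon", 1.2, visual=5.9, auditory=0.0,
                    cognitive=1.2, psychomotor=2.2),
        TaskSegment("confirm voice alert", 0.8, visual=1.0, auditory=4.3,
                    cognitive=1.2, psychomotor=0.0),
        TaskSegment("issue command via menu", 2.0, visual=4.0, auditory=0.0,
                    cognitive=4.6, psychomotor=2.6),
    ]
    print(f"Predicted mean MW: {mean_workload(timeline):.2f}")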