Journal: IEEE Sensors Journal [Institute of Electrical and Electronics Engineers]  Date: 2024-01-11  Volume/Issue: 24 (5): 6482-6489
Identifier
DOI: 10.1109/jsen.2023.3348514
Abstract
The electronic nose (E-nose) is of great importance in the field of gas detection. Detection tasks for mixed gases can usually be divided into two kinds: gas classification and concentration prediction. However, these two tasks are usually treated as independent subtasks in traditional E-nose pattern recognition algorithms; when both are required, gas classification is typically performed first, followed by concentration prediction. This serial processing is relatively inefficient, and incorrect gas recognition results can also lead to inaccurate concentration predictions. In this article, a multitask model based on a lightweight transformer encoder (MTL-Trans) is proposed to perform concentration prediction and gas classification simultaneously. The model uses a single transformer encoder layer to extract features with a self-attention mechanism and then reduces the dimensionality of the encoder output through a global average pooling layer to capture the global feature information in the E-nose data sequences. The extracted features are then used to process the gas classification and concentration prediction tasks in parallel, so that the E-nose response data can be processed efficiently. To optimize model performance, the hyperparameters are analyzed and explored in depth in this study. Multiple sets of comparison experiments are conducted on the UCI public dataset to evaluate the model. The experimental results show that the proposed MTL-Trans can effectively achieve collaborative training of gas concentration prediction and classification simultaneously, with good performance (Acc.: 98.5%, CO/RMSE: 23.8, Eth/RMSE: 2.23, $R^{2}$: 0.94).
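To make the described pipeline concrete, the following is a minimal PyTorch sketch of the idea in the abstract: a single lightweight transformer encoder layer with self-attention, global average pooling over the sensor-response sequence, and two parallel heads for classification and concentration regression. All layer sizes, dimensions, and names (e.g., n_sensors, d_model) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an MTL-Trans-style multitask model (assumptions, not the paper's code).
import torch
import torch.nn as nn


class MTLTransSketch(nn.Module):
    def __init__(self, n_sensors=16, d_model=64, n_heads=4, n_classes=4, n_gases=2):
        super().__init__()
        # Project each time step (one reading per sensor) into the model dimension.
        self.input_proj = nn.Linear(n_sensors, d_model)
        # Single lightweight transformer encoder layer with self-attention.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=1)
        # Parallel task heads sharing the pooled feature vector.
        self.cls_head = nn.Linear(d_model, n_classes)   # gas classification
        self.reg_head = nn.Linear(d_model, n_gases)     # concentration prediction

    def forward(self, x):                  # x: (batch, seq_len, n_sensors)
        h = self.encoder(self.input_proj(x))
        h = h.mean(dim=1)                  # global average pooling over the sequence
        return self.cls_head(h), self.reg_head(h)


if __name__ == "__main__":
    # Joint training signal: cross-entropy for the gas label plus MSE for concentrations.
    model = MTLTransSketch()
    x = torch.randn(8, 100, 16)            # dummy E-nose response sequences
    y_cls = torch.randint(0, 4, (8,))
    y_conc = torch.rand(8, 2)
    logits, conc = model(x)
    loss = nn.functional.cross_entropy(logits, y_cls) + nn.functional.mse_loss(conc, y_conc)
    loss.backward()
    print(logits.shape, conc.shape, loss.item())
```

Summing the two task losses is only one plausible way to realize the "collaborative training" mentioned in the abstract; the actual weighting and head design used in the paper may differ.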