Authors
Jianfeng Liu,Kui Wang,Mingjie Zhao,Yongjiang Chen
Identifier
DOI:10.1080/10589759.2023.2250513
Abstract
By combining time-frequency images and deep learning models, nonlinear ultrasound signals can be classified, detected, and predicted, using the nonlinear coefficient as the fundamental label for training the deep learning models. This integrated approach enables quantitative identification and real-time monitoring of concrete damage, promoting the wider adoption of nonlinear ultrasonic techniques in engineering applications. As a basis, the relationship between damage variations and nonlinear coefficients is examined by performing nonlinear ultrasonic damage testing on concrete specimens with different crack lengths and angles. The testing signals are converted into time-frequency images using the short-time Fourier transform and the continuous wavelet transform; both types of images are combined for data augmentation and input into the deep learning models for training, with nonlinear coefficients serving as labels for the time-frequency images. The MobileNetV2, VGG16, and ResNet18 models are trained separately on time-frequency image datasets for the length specimens, the angle specimens, and the combined length-angle specimens, and the performance of the models is evaluated and compared. The results show that all three models achieve accuracy rates above 94%, indicating good identification performance.
Finally, in a worked example, the nonlinear coefficients computed from the testing signals are compared with the nonlinear-coefficient labels of the time-frequency images identified by the deep learning model, confirming the high accuracy of damage identification by the model.

Keywords
Time-frequency image; deep learning; nonlinear ultrasound; nonlinear coefficient; concrete

Acknowledgments
The authors appreciate everyone who has contributed to the completion of this study.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
The research is funded by the Scientific and Technological Research Program of Chongqing Municipal Education Commission (Grant No. KJZD-K202100705), the Chongqing Talent Program "Package System" Project (Grant No. cstc2022ycjh-bgzxm0080), the Chongqing Water Conservancy Science and Technology Project (Grant No. CQSLK-2022002), and the Research and Innovation Program for Graduate Students in Chongqing (Grant No. CYB22236).
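The two ingredients of the pipeline described in the abstract can be sketched in a few lines: a short-time Fourier transform that turns a 1-D ultrasonic signal into a time-frequency image, and an estimate of the relative nonlinear coefficient, conventionally proportional to A2/A1^2 (second-harmonic amplitude over the squared fundamental amplitude). This is a minimal illustration, not the authors' implementation; the window length, hop size, sampling rate, and test frequencies below are hypothetical choices.

```python
import numpy as np

def stft_image(signal, win_len=256, hop=64):
    """Magnitude spectrogram (time-frequency image) via a Hann-windowed
    short-time Fourier transform. win_len/hop are illustrative values."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(seg)))
    # Rows = frequency bins, columns = time frames.
    return np.array(frames).T

def relative_nonlinear_coefficient(signal, fs, f0):
    """Relative nonlinear coefficient, proportional to A2 / A1**2, where
    A1 and A2 are the spectral amplitudes at the fundamental f0 and at
    the second harmonic 2*f0."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    a1 = spec[np.argmin(np.abs(freqs - f0))]
    a2 = spec[np.argmin(np.abs(freqs - 2 * f0))]
    return a2 / a1**2

# Synthetic check: a 50 kHz fundamental plus a weak second harmonic,
# sampled so that both tones fall exactly on FFT bins.
fs, f0, n = 1_024_000, 50_000, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 2 * f0 * t)

img = stft_image(x)                              # (129, 61) image
beta = relative_nonlinear_coefficient(x, fs, f0)  # ~0.05 / (n/2)
```

In the paper's scheme, images such as `img` (here from the STFT; the continuous wavelet transform would supply a second image per signal) become the network inputs, while quantities such as `beta` supply the training labels.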