Fusion
Quality (concept)
Computer Vision
Computer Science
Sensor Fusion
Artificial Intelligence
Visualization
Quality Assessment
Materials Science
Engineering
Reliability Engineering
Physics
Evaluation Methods
Philosophy
Linguistics
Quantum Mechanics
Authors
Shuchang Xu, Haohao Xu, Fangtao Mao, Wen Su, Menghui Ji, Haiyong Gan, Wenzhen Yang
Identifier
DOI:10.1109/tim.2024.3386205
Abstract
Although vision-based defect detection has been widely applied in industry, visual information alone is insufficient for certain flexible materials such as leather, whose quality control may also involve haptic properties. This paper introduces a solution based on the fusion of visual and tactile data to evaluate the overall quality of flexible materials such as leather. First, a prototype device for capturing visual and tactile data is introduced. The tactile data consist of three components along the X-, Y-, and Z-axes and are interpolated and visualized to align with the visual data. The aligned tactile data are then combined with the visual data and the derived defect segmentation map to form a cross-modal dataset. In addition, a DeepLabv3+-derived network, which can be viewed as a two-stream network with cross attention between visual and tactile data, is proposed to detect surface and non-surface defects. Experiments demonstrate that the proposed network achieves a significant improvement of 10% over the vision-only network. The ablation study also compares the performance of different fusion strategies. Finally, a grading strategy that integrates defect detection with haptic properties is introduced to assess the quality of flexible materials.
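To make the fusion idea concrete, the sketch below shows one plausible way to implement cross attention between a visual feature map and a tactile feature map that have been resampled onto the same spatial grid, as the abstract describes. This is a minimal illustration in PyTorch, not the authors' released code; the module name, channel sizes, and hyperparameters (VisuoTactileCrossAttention, embed_dim, num_heads) are assumptions for the example only.

```python
# Hypothetical sketch of visual-tactile cross attention (not the paper's code).
import torch
import torch.nn as nn


class VisuoTactileCrossAttention(nn.Module):
    """Fuse visual and tactile feature maps with cross attention.

    Visual features act as queries; tactile features provide keys and
    values, so haptic defect cues can modulate the visual stream.
    """

    def __init__(self, vis_channels: int, tac_channels: int,
                 embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.vis_proj = nn.Conv2d(vis_channels, embed_dim, kernel_size=1)
        self.tac_proj = nn.Conv2d(tac_channels, embed_dim, kernel_size=1)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads,
                                          batch_first=True)
        self.out_proj = nn.Conv2d(embed_dim, vis_channels, kernel_size=1)

    def forward(self, vis_feat: torch.Tensor,
                tac_feat: torch.Tensor) -> torch.Tensor:
        b, _, h, w = vis_feat.shape
        # Project both streams into a shared embedding space and flatten
        # the spatial dimensions into a token sequence of length h*w.
        q = self.vis_proj(vis_feat).flatten(2).transpose(1, 2)   # (B, HW, C)
        kv = self.tac_proj(tac_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        fused, _ = self.attn(q, kv, kv)                          # (B, HW, C)
        fused = fused.transpose(1, 2).reshape(b, -1, h, w)
        # Residual connection keeps the vision-only pathway intact.
        return vis_feat + self.out_proj(fused)


if __name__ == "__main__":
    # Backbone visual features and tactile features (derived from the
    # interpolated X/Y/Z force maps) aligned to the same 64x64 grid.
    vis = torch.randn(2, 512, 64, 64)
    tac = torch.randn(2, 32, 64, 64)
    fusion = VisuoTactileCrossAttention(vis_channels=512, tac_channels=32)
    print(fusion(vis, tac).shape)  # torch.Size([2, 512, 64, 64])
```

In a DeepLabv3+-style two-stream setup, a block like this would sit between the modality-specific encoders and the shared decoder; the residual form lets the network fall back to vision-only behavior when tactile cues are uninformative.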