Human–Computer Interaction
Computer Science
Computer Vision
Artificial Intelligence
Engineering
Systems Engineering
Authors
Tian Wang, Junming Fan, Pai Zheng
Identifier
DOI:10.1016/j.jmsy.2024.04.020
Abstract
Industry 5.0 prioritizes Human-centric Smart Manufacturing (HSM), aiming to enhance human operators' well-being and address their needs. This necessitates collaborative robots with advanced natural-interaction capabilities and improved perception, cognition, and action intelligence. The Large Language Model (LLM) exhibits strong reasoning and generalization capabilities which, once integrated into human–robot interaction and collaboration, can significantly advance the development of HSM. Accordingly, this paper explores part of the LLM's ability in the context of smart manufacturing, focusing on addressing interruptions in the manufacturing process caused by repetitive tool fetching. To alleviate this issue, a vision-and-language cobot navigation approach is innovatively adopted in the manufacturing environment, which can further be used to assist operators in retrieving tools. Specifically, a real Human–Robot Collaboration (HRC) manufacturing scene is first reconstructed and annotated using Three-Dimensional (3D) point cloud techniques. Then the LLM is utilized for the Automated Guided Vehicle (AGV) to comprehend natural language commands and generate Python code to trigger navigation actions. Finally, the Pathfinder algorithm is applied for the corresponding path planning. The framework is implemented in the Artificial Intelligence (AI) Habitat simulator, and the case studies demonstrate that the AGV can accurately comprehend complex language instructions, empowering human operators to complete manufacturing tasks efficiently.
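The path-planning step described above can be illustrated with a minimal sketch. This is not the Habitat simulator's Pathfinder (which operates on navigation meshes of the reconstructed 3D scene); it is a hypothetical breadth-first-search planner on a 2D occupancy grid standing in for the workshop floor, with the `plan_path` function, the `workshop` layout, and the start/goal cells all invented for illustration.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest path on a 2D occupancy grid via breadth-first search.

    grid: list of rows, 0 = free cell, 1 = obstacle.
    start, goal: (row, col) tuples.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # visited set doubling as a backpointer map
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking the backpointers.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# Hypothetical workshop layout: AGV docked at (0, 0), tool rack at (2, 3),
# with a shelf blocking column 2 in the top two rows.
workshop = [
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
print(plan_path(workshop, (0, 0), (2, 3)))
# → [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```

In the paper's framework, a routine like this would be invoked by the LLM-generated Python code once the target location named in the operator's command has been resolved against the annotated scene.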