Computer science
Computer vision
Inertial measurement unit
Convolutional neural network
Wearable computer
Obstacle
Artificial intelligence
Perception
Obstacle avoidance
Human–computer interaction
RGB color model
Embedded system
Robot
Mobile robot
Neuroscience
Political science
Law
Biology
Authors
Jinqiang Bai, Zhaoxiang Liu, Yimin Lin, Ye Li, Shiguo Lian, Dijun Liu
Source
Journal: Electronics
[MDPI AG]
Date: 2019-06-20
Volume/Issue: 8 (6): 697
Citations: 86
Identifier
DOI: 10.3390/electronics8060697
Abstract
Assistive devices for visually impaired people (VIP), which support daily traveling and improve social inclusion, are developing fast. Most of them try to solve the problem of navigation or obstacle avoidance, and other works focus on helping VIP recognize their surrounding objects. However, very few of them couple both capabilities (i.e., navigation and recognition). Aiming at the above needs, this paper presents a wearable assistive device that allows VIP to (i) navigate safely and quickly in unfamiliar environments, and (ii) recognize objects in both indoor and outdoor environments. The device consists of a consumer Red, Green, Blue and Depth (RGB-D) camera and an Inertial Measurement Unit (IMU), which are mounted on a pair of eyeglasses, and a smartphone. The device leverages the ground height continuity among adjacent image frames to segment the ground accurately and rapidly, and then searches for the moving direction based on the segmented ground. A lightweight Convolutional Neural Network (CNN)-based object recognition system is developed and deployed on the smartphone to increase the perception ability of VIP and enhance the navigation system. It can provide semantic information about the surroundings, such as the categories, locations, and orientations of objects. Human–machine interaction is performed through an audio module (a beeping sound for obstacle alerts, speech recognition for understanding user commands, and speech synthesis for expressing semantic information about the surroundings). We evaluated the performance of the proposed system through extensive experiments conducted in both indoor and outdoor scenarios, demonstrating the efficiency and safety of the proposed assistive system.
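The abstract's core navigation step, segmenting the ground from RGB-D depth using the camera's height and IMU-reported tilt, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, intrinsics, sign conventions, and the simple per-pixel height threshold are all illustrative assumptions (the paper additionally exploits height continuity across adjacent frames, which is omitted here):

```python
import numpy as np

def segment_ground(depth, fy, cy, cam_height, pitch, tol=0.05):
    """Return a boolean mask of pixels whose estimated height lies near
    the ground plane.

    depth      : (H, W) depth map in meters (camera z, forward)
    fy, cy     : vertical focal length and principal point, in pixels
    cam_height : camera height above the ground, meters
    pitch      : camera pitch from the IMU, radians (0 = level;
                 sign convention is an assumption of this sketch)
    tol        : height tolerance, meters, for labeling a pixel "ground"
    """
    h, w = depth.shape
    v = np.arange(h, dtype=np.float64).reshape(-1, 1)  # pixel row indices
    # Back-project each pixel's vertical coordinate in the camera frame
    # (y axis pointing down, z axis pointing forward).
    y_cam = (v - cy) / fy * depth
    z_cam = depth
    # Rotate by the IMU pitch to get the world "down" component,
    # then convert to height above the ground plane.
    y_down = y_cam * np.cos(pitch) + z_cam * np.sin(pitch)
    height = cam_height - y_down
    return np.abs(height) < tol
```

With a flat synthetic depth map, only the band of rows whose back-projected height matches the camera height is labeled as ground; in the actual system such a mask would feed the search for a free moving direction.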