Real-Time 3D Reconstruction Method Based on Monocular Vision

Keywords: Artificial intelligence, Computer vision, Computer science, 3D reconstruction, Monocular, Process (computing), Point cloud, Visual hull, Outlier, RGB color model, Iterative reconstruction, Operating system
Authors
Qingyu Jia,Liang Chang,Baohua Qiang,Shihao Zhang,Wu Xie,Xianyi Yang,Yangchang Sun,Minghao Yang
Source
Journal: Sensors [MDPI AG]
Volume/Issue: 21 (17): 5909 · Cited by: 7
Identifier
DOI:10.3390/s21175909
Abstract

Real-time 3D reconstruction is one of the current popular research directions of computer vision, and it has become a core technology in the fields of virtual reality, industrial automation systems, and mobile robot path planning. Currently, there are three main problems in the real-time 3D reconstruction field. Firstly, it is expensive: it typically requires a larger variety of sensors, which makes it inconvenient to use. Secondly, the reconstruction speed is slow, so the 3D model cannot be established accurately in real time. Thirdly, the reconstruction error is large, which cannot meet the accuracy requirements of many scenes. For this reason, we propose a real-time 3D reconstruction method based on monocular vision in this paper. Firstly, a single RGB-D camera is used to collect visual information in real time, and the YOLACT++ network is used to identify and segment the visual information to extract the important parts of it. Secondly, we combine the three stages of depth recovery, depth optimization, and depth fusion to propose a three-dimensional position estimation method based on deep learning for joint coding of visual information. This reduces the depth error introduced by the depth measurement process, so accurate 3D point values of the segmented image can be obtained directly. Finally, we propose a method based on limited outlier adjustment using the distance to the cluster center to refine the three-dimensional point values obtained above. It improves the real-time reconstruction accuracy and yields the three-dimensional model of the object in real time. Experimental results show that this method needs only a single RGB-D camera; it is low cost and convenient to use, and it significantly improves the speed and accuracy of 3D reconstruction.
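The paper itself does not include code; the sketch below is a minimal illustration of two steps the abstract describes: back-projecting a segmented depth region into a 3D point cloud, and discarding outliers by their distance to the cluster center. The camera intrinsics (fx, fy, cx, cy), the mean + k·std threshold rule, and the function names are assumptions made for illustration, not the authors' implementation; the segmentation mask stands in for a YOLACT++ instance mask.

```python
import numpy as np

def backproject_masked_depth(depth, mask, fx, fy, cx, cy):
    """Back-project segmented depth pixels to 3D camera coordinates.

    depth : (H, W) depth map in meters (0 where invalid)
    mask  : (H, W) boolean instance mask (e.g., from YOLACT++)
    fx, fy, cx, cy : pinhole intrinsics of the RGB-D camera (assumed values)
    """
    v, u = np.nonzero(mask & (depth > 0))   # pixel rows/cols inside the mask
    z = depth[v, u]
    x = (u - cx) * z / fx                   # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # (N, 3) point cloud


def filter_by_center_distance(points, k=2.0):
    """Drop points whose distance to the cluster center exceeds
    mean + k * std of all center distances (hypothetical threshold rule)."""
    center = points.mean(axis=0)
    dist = np.linalg.norm(points - center, axis=1)
    keep = dist <= dist.mean() + k * dist.std()
    return points[keep]


if __name__ == "__main__":
    # Toy example: synthetic depth map and mask standing in for a real
    # RGB-D frame and a YOLACT++ segmentation result.
    depth = np.full((480, 640), 1.5, dtype=np.float32)
    mask = np.zeros((480, 640), dtype=bool)
    mask[200:280, 300:380] = True
    pts = backproject_masked_depth(depth, mask, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    pts = filter_by_center_distance(pts, k=2.0)
    print(pts.shape)
```

In practice, the paper's pipeline replaces the raw sensor depth used here with values produced by its depth recovery, optimization, and fusion stages before the outlier adjustment step.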