Accurate 3D Reconstruction of Dynamic Objects by Spatial-Temporal Multiplexing and Motion-Induced Error Elimination

Computer vision, Artificial intelligence, Computer science, Pixel, Motion estimation, Iterative reconstruction, Robustness (evolution), Pattern recognition (psychology), Mathematics, Biochemistry, Chemistry, Gene
Authors
Congying Sui, Kejing He, Congyi Lyu, Yunhui Liu
Source
Journal: IEEE Transactions on Image Processing [Institute of Electrical and Electronics Engineers]
Volume 31, Pages 2106-2121, Cited by: 14
Identifier
DOI: 10.1109/tip.2022.3150297
Abstract

Three-dimensional (3D) reconstruction of dynamic objects has broad applications, including object recognition and robotic manipulation. However, simultaneously achieving high-accuracy reconstruction and robustness to motion is challenging. In this paper, we present a novel method for 3D reconstruction of dynamic objects, whose main features are as follows. First, a structured-light multiplexing method is developed that requires only three patterns to achieve high-accuracy encoding. Fewer projected patterns shorten the image-acquisition time, so the object motion within each reconstruction cycle is reduced. The three patterns, i.e., spatial-temporally encoded patterns, are generated by embedding a specifically designed spatial-coded texture map into temporally encoded three-step phase-shifting fringes. A temporal codeword and three spatial codewords are extracted from the composite patterns using a proposed extraction algorithm. The two types of codewords are used separately in stereo matching: the temporal codeword ensures high accuracy, while the spatial codewords remove phase ambiguity. Second, we aim to eliminate the reconstruction error induced by motion between frames, abbreviated as motion-induced error (MiE). Instead of assuming the object to be static while the three images are acquired, we derive the motion of projection pixels across frames. Using the extracted spatial codewords, correspondences between different frames are found, i.e., pixels with the same codewords are traceable in the image sequences. Therefore, we can obtain the phase map at each image-acquisition moment without being affected by the object motion, and the object surfaces corresponding to all the images can be recovered. Experimental results validate the high reconstruction accuracy and precision of the proposed method for dynamic objects with different motion speeds. Comparative experiments show that the presented method achieves superior performance under various types of motion, including translation in different directions and deformation.
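To make the spatial-temporal multiplexing concrete, below is a minimal sketch (Python/NumPy) assuming a simplified embedding in which the spatial-coded texture is added to the fringe offset, so the standard three-step phase-shifting relations still hold. The function names, the offset-based embedding, and the parameter values are illustrative assumptions, not the authors' formulation; the paper's specifically designed texture map, codeword-extraction algorithm, and motion-tracking step are not reproduced here.

import numpy as np

def composite_patterns(phase_map, texture, a=0.5, b=0.3):
    # Three composite patterns I_n = a + texture + b*cos(phase + 2*pi*n/3), n = 0, 1, 2
    # (hypothetical offset embedding of the spatial texture, for illustration only).
    shifts = 2.0 * np.pi * np.arange(3) / 3.0
    return [a + texture + b * np.cos(phase_map + s) for s in shifts]

def decode_composite(i0, i1, i2):
    # Standard three-step phase-shifting decoding for shifts 0, 2*pi/3, 4*pi/3:
    #   phi  = atan2(sqrt(3)*(I2 - I1), 2*I0 - I1 - I2)  -> wrapped phase in (-pi, pi]
    #   mean = (I0 + I1 + I2) / 3                         -> offset a plus embedded texture
    i0, i1, i2 = (np.asarray(x, dtype=np.float64) for x in (i0, i1, i2))
    phi = np.arctan2(np.sqrt(3.0) * (i2 - i1), 2.0 * i0 - i1 - i2)
    mean = (i0 + i1 + i2) / 3.0
    return phi, mean

# Synthetic example: the recovered texture is where the spatial codewords would be
# read out; the wrapped phase is the temporal codeword used for stereo matching.
h, w = 480, 640
phase = np.tile(np.linspace(0.0, 40.0 * np.pi, w), (h, 1))       # projector fringe phase
texture = 0.1 * (np.random.default_rng(0).random((h, w)) > 0.5)  # stand-in spatial code
i0, i1, i2 = composite_patterns(phase, texture)
wrapped_phi, mean_img = decode_composite(i0, i1, i2)
recovered_texture = mean_img - 0.5   # subtract the known offset a to isolate the texture

In the method described by the abstract, the wrapped phase (temporal codeword) provides high-accuracy stereo matching, while the spatial codewords carried by the texture resolve phase ambiguity and make pixels traceable across frames, which is what allows the phase map at each image-acquisition moment to be recovered despite object motion.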