Acta Electronica Sinica, 2022, Vol. 50, Issue (7): 1774-1792. DOI: 10.12263/DZXB.20211615
LIU Yu-xuan, ZHANG Li, AI Hai-bin, XU Biao, SUN Yu-shan, FAN Zhong-li

Received: 2021-12-03
Revised: 2022-02-28
Online: 2022-07-25
Published: 2022-07-30
Abstract: A light field camera uses a 2D image to record both the position and direction of light rays in space, recovering the light field inside the camera and thereby offering a new approach to 3D reconstruction. Centered on the problem of 3D reconstruction with light field cameras, this paper reviews light field acquisition techniques, surveys and discusses calibration algorithms for light field cameras, and summarizes and analyzes methods for recovering 3D depth information from light field images. On this basis, the main public light field datasets and algorithms are introduced. Finally, future research directions are outlined as a reference for subsequent researchers.
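The abstract's notion of recording both ray position and direction is usually formalized as the two-plane 4D light field L(u, v, s, t). As a minimal illustration only (not any specific algorithm from this survey; the array shapes and the random field are assumptions), the sketch below extracts a sub-aperture view and performs integer-shift "shift-and-sum" digital refocusing on a 4D light field array:

```python
import numpy as np

# A 4D light field L[v, u, t, s]: (u, v) indexes the angular sample
# (sub-aperture view) and (s, t) the spatial pixel. Shapes here are
# illustrative, not tied to any camera in the survey.
U, V, S, T = 5, 5, 64, 64
rng = np.random.default_rng(0)
lf = rng.random((V, U, T, S))

# A sub-aperture image fixes the angular coordinates: one perspective view.
center_view = lf[V // 2, U // 2]           # shape (T, S)

# Digital refocusing (shift-and-sum): shift each view in proportion to
# its angular offset by a slope alpha, then average over all views.
def refocus(lf, alpha):
    V, U, T, S = lf.shape
    acc = np.zeros((T, S))
    for v in range(V):
        for u in range(U):
            dv, du = v - V // 2, u - U // 2
            # integer-pixel shift keeps the sketch dependency-free
            acc += np.roll(lf[v, u], (round(alpha * dv), round(alpha * du)),
                           axis=(0, 1))
    return acc / (V * U)

img = refocus(lf, alpha=1.0)
print(center_view.shape, img.shape)
```

With alpha = 0 every view is averaged without shifting, which is the all-in-focus limit of this toy model; real pipelines use sub-pixel interpolation instead of `np.roll`.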
Yu-xuan LIU, Li ZHANG, Hai-bin AI, et al. Progress and Prospect of 3D Reconstruction Based on Light Field Cameras[J]. Acta Electronica Sinica, 2022, 50(7): 1774-1792.
| Camera type | Method | Year | Implementation | Resolution | Capture speed |
|---|---|---|---|---|---|
| Multi-sensor | Ref. [ | 2002 | 8×8 camera array | 320×240×8×8 | 15~20 fps |
| | Ref. [ | 2004 | 6×8 camera array | 320×240×6×8 | 15~20 fps |
| | Ref. [ | 2005 | 10×10 camera array | 640×480×10×10 | 30 fps |
| | Ref. [ | 2013 | 4×4 camera array | 1 000×750×4×4 | 25 fps |
| | Ref. [ | 2015 | 5×5 camera array | 1 024×768×5×5 | 30 fps |
| Time-sequential | Ref. [ | 2002 | Gantry | 1 300×1 030×62×56 | ‒ |
| | Ref. [ | 2013 | Linear stage | 5 616×3 744×100 | ‒ |
| | Ref. [ | 2008 | Programmable aperture | 3 039×2 014×5×5 | 0.5 s |
| Mask | Ref. [ | 2007 | Attenuation mask | 228×181×9×9 | ‒ |
| | Ref. [ | 2013 | Attenuation mask | 480×270×5×5 | ‒ |
| Microlens array | Ref. [ | 2005 | MLA | 292×292×14×14 | 16 ms |
| | Ref. [ | 2014 | MLA | 625×434×15×15 | 1/4 000 s |
| | Ref. [ | 2006 | MLA | 120×120×17×17 | 1/15 s |
| | Ref. [ | ‒ | Multi-focus MLA | Depends on type | ‒ |
Table 1 Summary of light field acquisition methods
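The Resolution column of Table 1 lists spatial × angular sample counts; raw data volume grows with their product, which is the spatial-angular trade-off behind these designs. A small sketch of that arithmetic (the 8-bit sample size and the two chosen entries are illustrative assumptions):

```python
# Raw sample counts implied by the "Resolution" column of Table 1:
# a light field frame has width × height spatial samples per view times
# the number of angular views, so data volume grows with the product.
def samples_per_frame(width, height, views_u=1, views_v=1):
    return width * height * views_u * views_v

# Two entries from Table 1 (8-bit grayscale samples assumed for illustration).
array_2005 = samples_per_frame(640, 480, 10, 10)   # 10×10 camera array
mla_2005 = samples_per_frame(292, 292, 14, 14)     # microlens array design

# At 30 fps and 1 byte/sample, approximate raw throughput in MB/s.
mb_per_s = array_2005 * 30 / 1e6
print(array_2005, mla_2005, round(mb_per_s, 1))
```

Even these early systems approach a gigabyte-per-second raw rate at video frame rates, which explains the compressed (mask-based) and single-sensor (MLA) designs later in the table.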
| Dataset / grid size (mm) | Metric | DPW[ | MPC[ | BJW[ | SCC[ | LFC[ |
|---|---|---|---|---|---|---|
| A/3.61 | PE (pixels) | 0.228(18) | 0.220(18) | 0.730(5) | ‒ | 0.713(5) |
| A/3.61 | R-PE (mm) | 0.082(18) | 0.081(18) | 0.218(5) | ‒ | 0.214(5) |
| B/3.61 | PE (pixels) | 0.158(18) | 0.157(18) | 0.483(9) | 0.346(9) | 0.464(9) |
| B/3.61 | R-PE (mm) | 0.059(18) | 0.057(18) | 0.147(9) | 0.097(9) | 0.142(9) |
| C/7.22 | PE (pixels) | 0.195(18) | 0.175(18) | ‒ | ‒ | ‒ |
| C/7.22 | R-PE (mm) | 0.130(18) | 0.112(18) | ‒ | ‒ | ‒ |
| D/7.22 | PE (pixels) | 0.167(18) | 0.148(18) | ‒ | ‒ | ‒ |
| D/7.22 | R-PE (mm) | 0.115(18) | 0.105(18) | ‒ | ‒ | ‒ |
| E/35.1 | PE (pixels) | 0.336(18) | 0.273(18) | 0.284(15) | 0.686(15) | 0.224(15) |
| E/35.1 | R-PE (mm) | 0.384(18) | 0.539(18) | 0.558(15) | 0.994(15) | 0.348(15) |

Note: DPW and MPC decode the light field before calibration; BJW, SCC, and LFC calibrate directly on raw light field images.
Table 2 Calibration results of several typical light field calibration algorithms on the five test datasets provided by Dansereau et al.[61]
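Table 2's PE column is a reprojection error in pixels and R-PE a ray reprojection error in millimetres; the exact definitions are in the cited papers. As a hedged sketch of the pixel-domain metric only, an RMS reprojection error over synthetic corner points (all data below is made up for illustration):

```python
import numpy as np

# RMS reprojection error in the spirit of Table 2's PE column: distance
# between detected corner locations and corners projected through the
# calibrated camera model, averaged over all corners.
def rms_reprojection_error(detected, projected):
    detected = np.asarray(detected, float)
    projected = np.asarray(projected, float)
    d = np.linalg.norm(detected - projected, axis=1)  # per-corner distance
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(1)
corners = rng.uniform(0, 600, size=(50, 2))            # detected (pixels)
noisy = corners + rng.normal(0, 0.2, size=corners.shape)  # projected model
print(round(rms_reprojection_error(corners, noisy), 3))
```

R-PE measures the analogous residual on rays in object space (millimetres), which is why the two rows of each dataset in Table 2 carry different units.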
| Light field data | Type | MVP[ (RMSE / RMSE_OCC) | OV[ (RMSE / RMSE_OCC) | CAE[ (RMSE / RMSE_OCC) | SPO[ (RMSE / RMSE_OCC) |
|---|---|---|---|---|---|
| Buddha | Local | 4.388 / 12.135 | 0.769 / 7.412 | 0.434 / 7.323 | 3.947 / 11.647 |
| Buddha | Optimized | 1.147 / 12.380 | 0.990 / 13.238 | 0.628 / 11.626 | 0.386 / 8.398 |
| Mona | Local | 7.211 / 21.873 | 1.257 / 8.365 | 1.113 / 9.591 | 5.077 / 13.701 |
| Mona | Optimized | 1.019 / 11.676 | 0.992 / 6.457 | 0.749 / 11.441 | 0.762 / 8.946 |
| Stilllife | Local | 0.676 / 2.661 | 0.430 / 2.088 | 0.147 / 1.741 | 2.690 / 2.531 |
| Stilllife | Optimized | 0.533 / 2.249 | 0.403 / 1.947 | 0.124 / 1.782 | 0.135 / 1.566 |
| Papillon | Local | 4.301 / 12.173 | 0.846 / 4.517 | 0.540 / 4.878 | 2.250 / 6.987 |
| Papillon | Optimized | 2.086 / 8.539 | 0.288 / 2.977 | 0.216 / 3.982 | 0.205 / 2.793 |
| Average | Local | 4.144 / 12.211 | 0.826 / 5.596 | 0.559 / 5.883 | 3.491 / 8.717 |
| Average | Optimized | 1.196 / 8.711 | 0.675 / 6.155 | 0.429 / 7.208 | 0.372 / 5.426 |
Table 3 Quantitative results of several typical light field depth estimation algorithms on the synthetic dataset provided by Wanner et al.[93]
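Table 3 reports RMSE over the whole disparity map and RMSE_OCC restricted to occlusion regions, where estimates typically degrade. A minimal sketch of that evaluation split, using a synthetic disparity map and an invented occlusion mask (benchmark masks ship with the dataset itself):

```python
import numpy as np

# RMSE between estimated and ground-truth disparity, overall and restricted
# to a mask, mirroring the RMSE / RMSE_OCC split in Table 3. The occlusion
# mask and error model here are synthetic, purely for illustration.
def rmse(estimate, truth, mask=None):
    err = np.asarray(estimate, float) - np.asarray(truth, float)
    if mask is not None:
        err = err[mask]
    return float(np.sqrt(np.mean(err ** 2)))

rng = np.random.default_rng(2)
gt = rng.uniform(-2, 2, size=(64, 64))          # ground-truth disparity
est = gt + rng.normal(0, 0.1, size=gt.shape)    # small estimation noise
occ = np.zeros_like(gt, dtype=bool)
occ[:, :8] = True                 # pretend the left strip is occluded
est[occ] += 0.5                   # estimates degrade near occlusions

print(round(rmse(est, gt), 3), round(rmse(est, gt, occ), 3))
```

Reporting both numbers separates overall accuracy from occlusion handling, which is why the optimized variants in Table 3 can improve RMSE while RMSE_OCC moves differently.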
| Dataset | Acquisition | Publisher | Year |
|---|---|---|---|
| The new Stanford LF archive[ | Multi-camera array; Lego gantry; light field microscope | Stanford University | 2008, 2016, 2019 |
| The Stanford Lytro LF archive[ | Lytro Illum | | |
| The Stanford Multiview LF[ | Multi-view LF captured by Lytro Illum | | |
| MIT LF archive[ | Synthetic | Massachusetts Institute of Technology | 2011, 2012, 2013 |
| 4D LF dataset[ | Synthetic | Heidelberg University | 2013, 2016 |
| EPFL LF dataset[ | Lytro Illum | École Polytechnique Fédérale de Lausanne | 2016 |
| SMART Dataset[ | Lytro Illum | University of Rome | 2016 |
| Disney LFs[ | A DSLR camera on a motorized linear stage | Disney | 2013 |
| UCB LF[ | Synthetic; Lytro Illum | University of California, Berkeley (UCB) | 2015 |
| DDFT[ | Lytro Illum | Technische Universität München (TUM) | 2018 |
| LF and Stereo[ | Raytrix R5; stereo camera | | 2018 |
| INRIA LF dataset[ | Lytro; Lytro Illum; Raytrix R8; synthetic | Institut national de recherche en informatique et en automatique (INRIA) | 2015, 2018, 2018, 2019 |
| MLRD[ | Lytro Illum; Raytrix R29 | Mid Sweden University | 2018 |
| CPCD[ | Custom-built | University of Bedfordshire | 2016 |
Table 4 Summary of public light field datasets
| Purpose | Method | Publisher | Year |
|---|---|---|---|
| Light field camera calibration | Ref. [ | University of Sydney | 2013 |
| | Ref. [ | Korea Advanced Institute of Science and Technology (KAIST) | 2017 |
| | Ref. [ | Northwestern Polytechnical University; ShanghaiTech University | 2019 |
| | Ref. [ | University College London | 2017 |
| | Ref. [ | Naval Aviation University | 2020 |
| 3D depth recovery from light field images | Ref. [ | University of Delaware; Microsoft Research | 2014 |
| | Ref. [ | KAIST | 2015 |
| | Ref. [ | UCB; University of California, San Diego (UCSD) | 2015 |
| | Ref. [ | Bina Nusantara University; Inha University; Seoul National University | 2018 |
| | Ref. [ | Beihang University; University of Wisconsin-Milwaukee | 2016 |
| | Ref. [ | Yonsei University; KAIST | 2018 |
| | Ref. [ | National Taiwan University; MediaTek | 2020 |
Table 5 Summary of open-source algorithms related to light field 3D reconstruction
1. WANG Zhi-zuo. Principle of Photogrammetry[M]. Beijing: Publishing House of Surveying and Mapping, 1990. (in Chinese)
2. ZHANG Zu-xun, ZHANG Jian-qing, ZHANG Li. Opportunities and challenges for development of digital photogrammetry[J]. Journal of Wuhan Technical University of Surveying and Mapping (WTUSM), 2000, 25(1): 7-11. (in Chinese)
3. NEX F, REMONDINO F. UAV for 3D mapping applications: A review[J]. Applied Geomatics, 2014, 6(1): 1-15.
4. BRUNO F, BRUNO S, DE SENSI G, et al. From 3D reconstruction to virtual reality: A complete methodology for digital archaeological exhibition[J]. Journal of Cultural Heritage, 2010, 11(1): 42-49.
5. YANG M D, CHAO C F, HUANG K S, et al. Image-based 3D scene reconstruction and exploration in augmented reality[J]. Automation in Construction, 2013, 33: 48-60.
6. NUCHTER A, SURMANN H, HERTZBERG J. Automatic model refinement for 3D reconstruction with mobile robots[C]//Fourth International Conference on 3-D Digital Imaging and Modeling (3DIM 2003). Banff, Canada: IEEE, 2003: 394-401.
7. SONG L M, LI X Y, YANG Y G, et al. Structured-light based 3D reconstruction system for cultural relic packaging[J]. Sensors, 2018, 18(9): 2981.
8. JIANG D L, HU Y X, YAN S C, et al. Efficient 3D reconstruction for face recognition[J]. Pattern Recognition, 2005, 38(6): 787-798.
9. KHAN U, YASIN A U, ABID M, et al. A methodological review of 3D reconstruction techniques in tomographic imaging[J]. Journal of Medical Systems, 2018, 42(10): 190.
10. KLEIN G, MURRAY D. Parallel tracking and mapping for small AR workspaces[C]//2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. Nara, Japan: IEEE, 2007: 225-234.
11. DAVISON A J, REID I D, MOLTON N D, et al. MonoSLAM: Real-time single camera SLAM[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6): 1052-1067.
12. MUR-ARTAL R, MONTIEL J M M, TARDÓS J D. ORB-SLAM: A versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
13. MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
14. CAMPOS C, ELVIRA R, RODRÍGUEZ J J G, et al. ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890.
15. HESS W, KOHLER D, RAPP H, et al. Real-time loop closure in 2D LIDAR SLAM[C]//2016 IEEE International Conference on Robotics and Automation. Stockholm, Sweden: IEEE, 2016: 1271-1278.
16. ZHANG J, SINGH S. LOAM: Lidar odometry and mapping in real-time[C]//Robotics: Science and Systems. Berkeley, California, USA: University of California, Berkeley, 2014, 2(9): 1-9.
17. YE H Y, CHEN Y Y, LIU M. Tightly coupled 3D lidar inertial odometry and mapping[C]//2019 International Conference on Robotics and Automation (ICRA). Montreal, Canada: IEEE, 2019: 3144-3150.
18. KOIDE K, MIURA J, MENEGATTI E. A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement[J]. International Journal of Advanced Robotic Systems, 2019, 16(2): 1729881419841532.
19. XU W, ZHANG F. FAST-LIO: A fast, robust LiDAR-inertial odometry package by tightly-coupled iterated Kalman filter[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 3317-3324.
20. CHEN X, MILIOTO A, PALAZZOLO E, et al. SuMa++: Efficient LiDAR-based semantic SLAM[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Macau, China: IEEE, 2019: 4530-4537.
21. KERL C, STURM J, CREMERS D. Dense visual SLAM for RGB-D cameras[C]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. Tokyo, Japan: IEEE, 2013: 2100-2106.
22. NEWCOMBE R A, IZADI S, HILLIGES O, et al. KinectFusion: Real-time dense surface mapping and tracking[C]//2011 10th IEEE International Symposium on Mixed and Augmented Reality. Basel, Switzerland: IEEE, 2011: 127-136.
23. WHELAN T, LEUTENEGGER S, SALAS MORENO R, et al. ElasticFusion: Dense SLAM without a pose graph[C]//Robotics: Science and Systems. Rome, Italy: Sapienza University of Rome, 2015: 1-9.
24. DAI A, NIEßNER M, ZOLLHÖFER M, et al. BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface reintegration[J]. ACM Transactions on Graphics, 2017, 36(4): 76a.
25. SU Z, XU L, ZHENG Z R, et al. RobustFusion: Human volumetric capture with data-driven visual cues using a RGBD camera[M]//Computer Vision - ECCV 2020. Cham, Switzerland: Springer, 2020: 246-264.
26. NG R. Light field photography with a hand-held plenoptic camera[D]. Palo Alto: Stanford University, 2005.
27. NG R. Fourier slice photography[J]. ACM Transactions on Graphics, 2005, 24(3): 735-744.
28. RAYTRIX GmbH. Raytrix | 3D light field camera technology[EB/OL]. [2022].
29. VENKATARAMAN K, LELESCU D, DUPARRÉ J, et al. PiCam: An ultra-thin high performance monolithic camera array[J]. ACM Transactions on Graphics, 2013, 32(6): 1-13.
30. FANG Lu, DAI Qiong-hai. Computational light field imaging[J]. Acta Optica Sinica, 2020, 40(1): 9-30. (in Chinese)
31. WILLIEM, PARK I K, LEE K M. Robust light field depth estimation using occlusion-noise aware data costs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(10): 2484-2497.
32. ZELLER N, QUINT F, STILLA U. Scale-awareness of light field camera based visual odometry[C]//Proceedings of the European Conference on Computer Vision (ECCV). Munich, Germany: Springer, 2018: 715-730.
33. MITRA K, VEERARAGHAVAN A. Light field denoising, light field superresolution and stereo camera based refocussing using a GMM light field patch prior[C]//2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. Providence, RI, USA: IEEE, 2012: 22-28.
34. SUN Fu-sheng, HAN Xie. Super-resolution algorithm based on precise color vector constraint of light field camera[J]. Acta Optica Sinica, 2019, 39(3): 59-68. (in Chinese)
35. LIU Ya-mei, ZHANG Jun, ZHANG Xu-dong, et al. Review of saliency detection on light fields[J]. Journal of Image and Graphics, 2020, 25(12): 2465-2483. (in Chinese)
36. YÜCER K, SORKINE-HORNUNG A, WANG O, et al. Efficient 3D object segmentation from densely sampled light fields with applications to 3D reconstruction[J]. ACM Transactions on Graphics, 2016, 35(3): 1-15.
37. PEI Z, ZHANG Y N, YANG T, et al. A novel multi-object detection method in complex scene using synthetic aperture imaging[J]. Pattern Recognition, 2012, 45(4): 1637-1658.
38. GUO X Q, YU Z, KANG S B, et al. Enhancing light fields through ray-space stitching[J]. IEEE Transactions on Visualization and Computer Graphics, 2016, 22(7): 1852-1861.
39. LIU Yu-xuan. 3D Reconstruction Based on Light Field Cameras[D]. Wuhan: Wuhan University, 2020. (in Chinese)
40. WU G C, MASIA B, JARABO A, et al. Light field image processing: An overview[J]. IEEE Journal of Selected Topics in Signal Processing, 2017, 11(7): 926-954.
41. GERSHUN A. The light field[J]. Journal of Mathematics and Physics, 1939, 18(1/2/3/4): 51-151.
42. ADELSON E H, BERGEN J R. The plenoptic function and the elements of early vision[J]. Computational Models of Visual Processing, 1991: 3-20.
43. LEVOY M, HANRAHAN P. Light field rendering[C]//Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96). New York: ACM Press, 1996: 31-42.
44. JOHANNSEN O, HONAUER K, GOLDLUECKE B, et al. A taxonomy and evaluation of dense light field depth estimation algorithms[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, HI, USA: IEEE, 2017: 1795-1812.
45. YANG J C, EVERETT M, BUEHLER C, et al. A real-time distributed light field camera[J]. Rendering Techniques, 2002, 2002: 77-86.
46. WILBURN B S, SMULSKI M, LEE H H K, et al. Light field video camera[C]//Proceedings of Media Processors 2002. San Jose: SPIE, 2001, 4674: 29-36.
47. WILBURN B, JOSHI N, VAISH V, et al. High performance imaging using large camera arrays[J]. ACM Transactions on Graphics, 2005, 24(3): 765-776.
48. ZHANG C, CHEN T. A self-reconfigurable camera array[C]//SIGGRAPH '04: ACM SIGGRAPH 2004 Sketches. New York: ACM Press, 2004: 151.
49. LIN X, WU J M, ZHENG G A, et al. Camera array based light field microscopy[J]. Biomedical Optics Express, 2015, 6(9): 3179-3189.
50. VAISH V, ADAMS A. Stanford (new) light field archive[EB/OL]. [2022].
51. KIM C, ZIMMER H, PRITCH Y, et al. Scene reconstruction from high spatio-angular resolution light fields[J]. ACM Transactions on Graphics, 2013, 32(4): 1-12.
52. LIANG C K, LIN T H, WONG B Y, et al. Programmable aperture photography[J]. ACM Transactions on Graphics, 2008, 27(3): 1-10.
53. VEERARAGHAVAN A, RASKAR R, AGRAWAL A, et al. Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing[J]. ACM Transactions on Graphics, 2007, 26(3): 69.
54. MARWAH K, WETZSTEIN G, BANDO Y, et al. Compressive light field photography using overcomplete dictionaries and optimized projections[J]. ACM Transactions on Graphics, 2013, 32(4): 46:1-46:12.
55. ADELSON E H, WANG J Y A. Single lens stereo with a plenoptic camera[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 99-106.
56. DUAN H X, MEI L, WANG J, et al. A new imaging model of Lytro light field camera and its calibration[J]. Neurocomputing, 2019, 328: 189-194.
57. LUMSDAINE A, GEORGIEV T. The focused plenoptic camera[C]//2009 IEEE International Conference on Computational Photography. San Francisco: IEEE, 2009: 1-8.
58. LEVOY M, NG R, ADAMS A, et al. Light field microscopy[J]. ACM Transactions on Graphics, 2006, 25(3): 924-934.
59. SHEHZAD K, XU Y. Graphene light-field camera[J]. Nature Photonics, 2020, 14(3): 134-136.
60. PLESS R. Using many cameras as one[C]//2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. WI, USA: IEEE, 2003: II-587.
61. DANSEREAU D G, PIZARRO O, WILLIAMS S B. Decoding, calibration and rectification for lenselet-based plenoptic cameras[C]//2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, Oregon, USA: IEEE, 2013: 1027-1034.
62. BOK Y, JEON H G, KWEON I S. Geometric calibration of micro-lens-based light field cameras using line features[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(2): 287-300.
63. ZHANG Q, ZHANG C P, LING J B, et al. A generic multi-projection-center model and calibration method for light field cameras[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(11): 2539-2552.
64. JI Y, WU J. Calibration method of light-field camera for photogrammetry application[J]. Measurement, 2019, 148: 106943.
65. PERTUZ S, PULIDO-HERRERA E, KAMARAINEN J K. Focus model for metric depth estimation in standard plenoptic cameras[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2018, 144: 38-47.
66. ZHOU P, YANG Z, CAI W J, et al. Light field calibration and 3D shape measurement based on epipolar-space[J]. Optics Express, 2019, 27(7): 10171-10184.
67. LIU Q S, XIE X F, ZHANG X Z, et al. Stepwise calibration of plenoptic cameras based on corner features of raw images[J]. Applied Optics, 2020, 59(14): 4209-4219.
68. LIU Qing-song, XIE Xiao-fang, ZHANG Xuan-zhe, et al. Checkerboard corner detection algorithm for calibration of focused plenoptic camera[J]. Acta Optica Sinica, 2020, 40(14): 153-160. (in Chinese)
69. LIU Y X, MO F, ALEKSANDROV M, et al. Accurate calibration of standard plenoptic cameras using corner features from raw images[J]. Optics Express, 2021, 29(1): 158-169.
70. NOUSIAS S, CHADEBECQ F, PICHAT J, et al. Corner-based geometric calibration of multi-focus plenoptic cameras[C]//2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017: 957-965.
71. YU J Y, MCMILLAN L, GORTLER S. Scam light field rendering[C]//10th Pacific Conference on Computer Graphics and Applications. Beijing, China: IEEE, 2002: 137-144.
72. BOLLES R C, BAKER H H, MARIMONT D H. Epipolar-plane image analysis: An approach to determining structure from motion[J]. International Journal of Computer Vision, 1987, 1(1): 7-55.
73. HEBER S, RANFTL R, POCK T. Variational shape from light field[C]//Energy Minimization Methods in Computer Vision and Pattern Recognition. Berlin: Springer, 2013: 66-79.
74. YU Z, GUO X Q, LING H B, et al. Line assisted light field triangulation and stereo matching[C]//2013 IEEE International Conference on Computer Vision. Sydney: IEEE, 2013: 2792-2799.
75. JEON H G, PARK J, CHOE G, et al. Accurate depth map estimation from a lenslet light field camera[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE, 2015: 1547-1555.
76. TAO M W, HADAP S, MALIK J, et al. Depth from combining defocus and correspondence using light-field cameras[C]//2013 IEEE International Conference on Computer Vision. Sydney: IEEE, 2013: 673-680.
77. TAO M W, SRINIVASAN P P, MALIK J, et al. Depth from shading, defocus, and correspondence using light-field angular coherence[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE, 2015: 1940-1948.
78. WANG T C, EFROS A A, RAMAMOORTHI R. Depth estimation with occlusion modeling using light-field cameras[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(11): 2170-2181.
79. WANG T C, EFROS A A, RAMAMOORTHI R. Occlusion-aware depth estimation using light-field cameras[C]//2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015: 3487-3495.
80. CHEN C, LIN H T, YU Z, et al. Light field stereo matching using bilateral statistics of surface cameras[C]//2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, Ohio, USA: IEEE, 2014: 1518-1525.
81. WILLIEM, PARK I K. Robust light field depth estimation for noisy scene with occlusion[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 4396-4404.
82. WANNER S, GOLDLUECKE B. Variational light field analysis for disparity estimation and super-resolution[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(3): 606-619.
83. LI J Q, LU M L, LI Z N. Continuous depth map reconstruction from light fields[J]. IEEE Transactions on Image Processing, 2015, 24(11): 3257-3265.
84. ZHANG Y B, LV H J, LIU Y B, et al. Light-field depth estimation via epipolar plane image analysis and locally linear embedding[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2017, 27(4): 739-747.
85. SUZUKI T, TAKAHASHI K, FUJII T. Sheared EPI analysis for disparity estimation from light fields[J]. IEICE Transactions on Information and Systems, 2017, E100.D(9): 1984-1993.
86. ZHANG S, SHENG H, LI C, et al. Robust depth estimation for light field via spinning parallelogram operator[J]. Computer Vision and Image Understanding, 2016, 145: 148-159.
87. SHENG H, ZHAO P, ZHANG S, et al. Occlusion-aware depth estimation for light field using multi-orientation EPIs[J]. Pattern Recognition, 2018, 74: 587-599.
88. SCHILLING H, DIEBOLD M, ROTHER C, et al. Trust your model: Light field depth estimation with inline occlusion handling[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018: 4530-4538.
89. SHIN C, JEON H G, YOON Y, et al. EPINET: A fully-convolutional neural network using epipolar geometry for depth from light field images[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018: 4748-4757.
90. TSAI Y J, LIU Y L, OUHYOUNG M, et al. Attention-based view selection networks for light-field disparity estimation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. New York: AAAI Press, 2020, 34(7): 12095-12103.
91. ZHOU W H, LIANG L K, ZHANG H, et al. Scale and orientation aware EPI-patch learning for light field depth estimation[C]//2018 24th International Conference on Pattern Recognition (ICPR). Beijing: IEEE, 2018: 2362-2367.
92. WANG X C, TAO C N, WU R M, et al. Light-field-depth-estimation network based on epipolar geometry and image segmentation[J]. Journal of the Optical Society of America A, 2020, 37(7): 1236-1243.
93. WANNER S, MEISTER S, GOLDLUECKE B. Datasets and benchmarks for densely sampled 4D light fields[C]//Proceedings of the 18th VMV. Lugano, Switzerland: Università della Svizzera italiana (USI), 2013, 13: 225-226.
94. RERABEK M, EBRAHIMI T. New light field image dataset[C]//Proceedings of the 8th International Conference on Quality of Multimedia Experience (QoMEX). Lisbon, Portugal: IEEE, 2016: 1-2.
95. DANSEREAU D G, GIROD B, WETZSTEIN G. LiFF: Light field features in scale and depth[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 8034-8043.
96. WETZSTEIN G. The MIT synthetic light field archive[EB/OL]. [2022-02-22].
97. HONAUER K, JOHANNSEN O, KONDERMANN D, et al. A dataset and evaluation methodology for depth estimation on 4D light fields[C]//Computer Vision - ACCV 2016. Taipei: Springer, Cham, 2017: 19-34.
98. PAUDYAL P, OLSSON R, SJÖSTRÖM M, et al. SMART: A light field image quality dataset[C]//MMSys '16: Proceedings of the 7th International Conference on Multimedia Systems. Klagenfurt, Austria: ACM Press, 2016: 1-6.
99. HAZIRBAS C, SOYER S G, STAAB M C, et al. Deep depth from focus[C]//Proceedings of 2018 Asian Conference on Computer Vision. Perth, Australia: Springer, Cham, 2018: 525-541.
100. ZELLER N, QUINT F, STILLA U. A synchronized stereo and plenoptic visual odometry dataset[DB/OL]. [2018].
101. INRIA. Light field software & datasets[DS/OL]. [2022-02-22].
102. AHMAD W, PALMIERI L, KOCH R, et al. Matching light field datasets from plenoptic cameras 1.0 and 2.0[C]//2018 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON). Helsinki, Finland: IEEE, 2018: 1-4.
103. HAHNE C, AGGOUN A, VELISAVLJEVIC V, et al. Refocusing distance of a standard plenoptic camera[J]. Optics Express, 2016, 24(19): 21521-21540.