Abstract

Light field camera calibration is complicated by the fact that a single point in the 3D scene appears many times in the image plane. In contrast to previous geometrical models of light field cameras, which describe the relationship between a 3D point in the scene and the 4D light field, we propose an epipolar-space (EPS) based geometrical model that relates a 3D point in the scene to a 3-parameter vector in the EPS. Moreover, a closed-form solution for 3D shape measurement based on this geometrical model is derived. Our calibration method consists of an initial linear solution followed by nonlinear optimization with the Levenberg-Marquardt algorithm. The light field model is validated with the commercially available Lytro Illum light field camera, and the performance of 3D shape measurement is verified with both real scene data and a publicly available dataset.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 1027–1034.
  2. Q. Zhang, C. Zhang, J. Ling, Q. Wang, and J. Yu, “A Generic Multi-Projection-Center Model and Calibration Method for Light Field Cameras,” IEEE Trans. Pattern Anal. Mach. Intell., early access, article 8430574 (2018).
  3. Y. Bok, H. G. Jeon, and I. S. Kweon, “Geometric Calibration of Micro-Lens-Based Light Field Cameras Using Line Features,” IEEE Trans. Pattern Anal. Mach. Intell. 39(2), 287–300 (2017).
  4. Z. Cai, X. Liu, X. Peng, and B. Z. Gao, “Ray calibration and phase mapping for structured-light-field 3D reconstruction,” Opt. Express 26(6), 7598–7613 (2018).
  5. E. M. Hall, T. W. Fahringer, D. R. Guildenbecher, and B. S. Thurow, “Volumetric calibration of a plenoptic camera,” Appl. Opt. 57(4), 914–923 (2018).
  6. S. Wanner and B. Goldluecke, “Globally consistent depth labelling of 4D light fields,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 41–48.
  7. N. Zeller, F. Quint, and U. Stilla, “Depth estimation and camera calibration of a focused plenoptic camera for visual odometry,” ISPRS J. Photogramm. Remote Sens. 118, 83–100 (2016).
  8. Williem, I. K. Park, and K. M. Lee, “Robust light field depth estimation using occlusion-noise aware data costs,” IEEE Trans. Pattern Anal. Mach. Intell. 40(10), 2484–2497 (2018).
  9. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing, M. Landy and J. A. Movshon, eds. (MIT Press, 1991), pp. 3–20.
  10. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1996), pp. 31–42.
  11. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2(11), 1–11 (2005).
  12. P. Yang, Z. Wang, Y. Yan, W. Qu, H. Zhao, A. Asundi, and L. Yan, “Close-range photogrammetry with light field camera: from disparity map to absolute distance,” Appl. Opt. 55(27), 7477–7486 (2016).
  13. Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
  14. Y. Zhang, H. Lv, Y. Liu, H. Wang, X. Wang, Q. Huang, X. Xiang, and Q. Dai, “Light-field depth estimation via epipolar plane image analysis and locally linear embedding,” IEEE Trans. Circ. Syst. Video Tech. 27(4), 739–747 (2017).
  15. G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: an overview,” IEEE J. Sel. Top. Signal Process. 11(7), 926–954 (2017).
  16. S. Heber, W. Yu, and T. Pock, “Neural EPI-volume networks for shape from light field,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2017), pp. 2271–2279.
  17. L. Su, Q. Yan, J. Cao, and Y. Yuan, “Calibrating the orientation between a microlens array and a sensor based on projective geometry,” Opt. Lasers Eng. 82, 22–27 (2016).
  18. “JPEG Pleno Database: EPFL Light-field data set,” https://jpeg.org/plenodb/lf/epfl/
  19. M. Řeřábek and T. Ebrahimi, “New light field image dataset,” in Proceedings of the 8th International Workshop on Quality of Multimedia Experience (QoMEX) (Lisbon, Portugal, 2016).
  20. http://marine.acfr.usyd.edu.au/research/plenoptic-imaging/




Figures (9)

Fig. 1: 4D light field.
Fig. 2: Geometrical model of light field cameras. (a) Projection model. (b) Image plane with four element images. (c) Micro-lens array plane.
Fig. 3: Epipolar-plane images. Bottom: the EPI I_{v*,y*}(u, x) generated by fixing v and y. Right: the EPI I_{u*,x*}(v, y) generated by fixing u and x.
Fig. 4: Geometrical model of the light field camera.
Fig. 5: Light field camera and calibration board.
Fig. 6: Calibration results of the light field camera and the calibration board.
Fig. 7: Reprojection error. (a) Without nonlinear optimization and distortion correction; (b) with nonlinear optimization and distortion correction.
Fig. 8: Distance error between adjacent feature points.
Fig. 9: 3D shape measurement results. Top: raw images from the JPEG Pleno Database [18]; bottom: corresponding 3D measurement results.

Tables (2)

Table 1: Notation of symbols in the light field model.
Table 2: Parameters of the light field camera before and after optimization.

Equations (25)

(1) \frac{S-x_c}{z_c}+\frac{S-x_m}{h_m'}=\frac{S-x_d}{h_m}+\frac{S-x_m}{h_m'}

(2) \frac{1}{f}=\frac{1}{h_m}+\frac{1}{h_m'}

(3) \frac{S-x_c}{z_c}+\frac{S-x_m}{h_m'}=\frac{S}{f}

(4) S=\frac{q h_m'}{b}(u-u_0)

(5) x=\frac{q h_m'^2}{bd}\left[\frac{1}{z_c}-\frac{1}{h_m}\right](u-u_0)-\frac{x_c h_m'}{z_c d}+x_0

(6) y=\frac{q h_m'^2}{bd}\left[\frac{1}{z_c}-\frac{1}{h_m}\right](v-v_0)-\frac{y_c h_m'}{z_c d}+y_0

(7) \begin{bmatrix}K\\B_x\\B_y\\1\end{bmatrix}=\frac{1}{z_c}\begin{bmatrix}0&0&-\frac{q h_m'^2}{b d h_m}&\frac{q h_m'^2}{b d}\\-\frac{h_m'}{d}&0&x_0&0\\0&-\frac{h_m'}{d}&y_0&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}x_c\\y_c\\z_c\\1\end{bmatrix}

(8) K=\frac{q h_m'^2}{bd}\left[\frac{1}{z_c}-\frac{1}{h_m}\right],\quad B_x=x_0-\frac{x_c h_m'}{z_c d},\quad B_y=y_0-\frac{y_c h_m'}{z_c d}

(9) x=K(u-u_0)+B_x

(10) y=K(v-v_0)+B_y
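The EPS parameters K, B_x, and B_y are linear in the camera-frame point (x_c, y_c, z_c), and each EPI coordinate is then linear in the sub-aperture index. A minimal numerical sketch of this forward model, assuming hypothetical placeholder values for the physical constants q, b, d, h_m, h_m' (not the paper's calibrated values):

```python
import numpy as np

# Hypothetical intrinsic parameters, placeholders for illustration only.
q, b_, d = 1.4e-5, 2.0e-3, 1.0e-4   # pixel pitch, aperture size, micro-lens pitch
h_m, h_mp = 0.08, 1.0e-3            # main-lens conjugate distances (h_m, h_m')
u0, v0 = 7.0, 7.0                   # central sub-aperture indices
x0, y0 = 0.0, 0.0                   # EPS principal-point offsets

def eps_params(xc, yc, zc):
    """Map a camera-frame point to the EPS vector (K, Bx, By)."""
    K = q * h_mp**2 / (b_ * d) * (1.0 / zc - 1.0 / h_m)
    Bx = x0 - xc * h_mp / (zc * d)
    By = y0 - yc * h_mp / (zc * d)
    return K, Bx, By

def project(xc, yc, zc, u, v):
    """EPI coordinates (x, y) of the point for sub-aperture (u, v)."""
    K, Bx, By = eps_params(xc, yc, zc)
    return K * (u - u0) + Bx, K * (v - v0) + By
```

Because x is linear in u with slope K, the point traces a straight line across the EPI, which is exactly what makes the 3-parameter EPS representation sufficient.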
(11) \begin{bmatrix}K\\B_x\\B_y\\1\end{bmatrix}=\frac{1}{z_c}\begin{bmatrix}0&0&-\frac{q h_m'^2}{b d h_m}&\frac{q h_m'^2}{b d}\\-\frac{h_m'}{d}&0&x_0&0\\0&-\frac{h_m'}{d}&y_0&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}R&t\\0&1\end{bmatrix}\begin{bmatrix}x_w\\y_w\\z_w\\1\end{bmatrix}

(12) \begin{bmatrix}K\\B_x\\B_y\\1\end{bmatrix}=H_{4\times 3}\begin{bmatrix}X\\Y\\1\end{bmatrix}=\frac{1}{z_c}M_1 M_2\begin{bmatrix}X\\Y\\1\end{bmatrix}

(13) H=\begin{bmatrix}h_1&h_2&h_3\end{bmatrix},\quad M_1=\begin{bmatrix}0&0&m_{11}&m_{12}\\m_{21}&0&m_{22}&0\\0&m_{31}&m_{32}&0\\0&0&1&0\end{bmatrix}=\begin{bmatrix}0&0&-\frac{q h_m'^2}{b d h_m}&\frac{q h_m'^2}{b d}\\-\frac{h_m'}{d}&0&x_0&0\\0&-\frac{h_m'}{d}&y_0&0\\0&0&1&0\end{bmatrix},\quad M_2=\begin{bmatrix}r_1&r_2&t\\0&0&1\end{bmatrix}

(14) h_1^T M_1^{-T}M_1^{-1}h_1=h_2^T M_1^{-T}M_1^{-1}h_2

(15) h_1^T M_1^{-T}M_1^{-1}h_2=0

(16) B=M_1^{-T}M_1^{-1}=\begin{bmatrix}B_{11}&0&0&B_{12}\\0&B_{21}&0&B_{22}\\0&0&B_{31}&B_{32}\\B_{12}&B_{22}&B_{32}&B_{41}\end{bmatrix}=\begin{bmatrix}\frac{1}{m_{12}^2}&0&0&-\frac{m_{11}}{m_{12}^2}\\0&\frac{1}{m_{21}^2}&0&-\frac{m_{22}}{m_{21}^2}\\0&0&\frac{1}{m_{31}^2}&-\frac{m_{32}}{m_{31}^2}\\-\frac{m_{11}}{m_{12}^2}&-\frac{m_{22}}{m_{21}^2}&-\frac{m_{32}}{m_{31}^2}&1+\sigma\end{bmatrix},\quad \sigma=\left(\frac{m_{11}}{m_{12}}\right)^2+\left(\frac{m_{22}}{m_{21}}\right)^2+\left(\frac{m_{32}}{m_{31}}\right)^2

(17) b=\begin{bmatrix}B_{11}&B_{12}&B_{21}&B_{22}&B_{31}&B_{32}&B_{41}\end{bmatrix}^T

(18) h_i^T B h_j=v_{ij}^T b

(19) v_{ij}=\begin{bmatrix}h_{i1}h_{j1}&h_{i4}h_{j1}+h_{i1}h_{j4}&h_{i2}h_{j2}&h_{i4}h_{j2}+h_{i2}h_{j4}&h_{i3}h_{j3}&h_{i4}h_{j3}+h_{i3}h_{j4}&h_{i4}h_{j4}\end{bmatrix}^T

(20) \begin{bmatrix}v_{12}^T\\(v_{11}-v_{22})^T\end{bmatrix}b=0

(21) Vb=0
(22) z_c=\frac{1}{\left\|M_1^{-1}h_1\right\|},\quad \begin{bmatrix}m_1&m_2&m_3\end{bmatrix}r_1=z_c h_1,\quad \begin{bmatrix}m_1&m_2&m_3\end{bmatrix}r_2=z_c h_2,\quad r_3=r_1\times r_2,\quad \begin{bmatrix}m_1&m_2&m_3\end{bmatrix}t=z_c h_3-m_4

(23) \begin{bmatrix}m_1&m_2&m_3&m_4\end{bmatrix}=M_1

(24) \bar{x}=x+(x-x_0)\left[k_1(x_m^2+y_m^2)+k_2(x_m^2+y_m^2)^2\right],\quad \bar{y}=y+(y-y_0)\left[k_1(x_m^2+y_m^2)+k_2(x_m^2+y_m^2)^2\right]

(25) \arg\min\sum_{u=1}^{N_u}\sum_{v=1}^{N_v}\sum_{i=1}^{N}\sum_{j=1}^{M}E_{i,j}^{u,v}\left(M_1,k_1,k_2,R_i,t_i\right)
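The calibration finishes by refining all parameters with Levenberg-Marquardt minimization of the reprojection error. The sketch below shows this optimization pattern with scipy.optimize.least_squares(method='lm') for a single pose, using a simplified pinhole-plus-radial-distortion residual; the residual function, parameterization, and data are illustrative stand-ins, not the paper's EPS objective:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, pts_world, obs):
    """Reprojection residuals for one pose.

    params = [k1, k2, rx, ry, rz, tx, ty, tz]: radial distortion,
    axis-angle rotation, and translation (a stand-in parameterization).
    """
    k1, k2 = params[0], params[1]
    rvec, t = params[2:5], params[5:8]
    # Rodrigues formula: rotation matrix from the axis-angle vector.
    S = np.array([[0.0, -rvec[2], rvec[1]],
                  [rvec[2], 0.0, -rvec[0]],
                  [-rvec[1], rvec[0], 0.0]])
    theta = np.linalg.norm(rvec)
    R = np.eye(3) if theta < 1e-12 else (
        np.eye(3) + np.sin(theta) / theta * S
        + (1.0 - np.cos(theta)) / theta**2 * (S @ S))
    cam = pts_world @ R.T + t
    proj = cam[:, :2] / cam[:, 2:3]                # pinhole stand-in projection
    r2 = (proj**2).sum(axis=1, keepdims=True)
    proj = proj * (1.0 + k1 * r2 + k2 * r2**2)     # two-term radial distortion
    return (proj - obs).ravel()

def refine(x0, pts_world, obs):
    """Levenberg-Marquardt refinement starting from the linear estimate x0."""
    return least_squares(residuals, x0, args=(pts_world, obs), method='lm')
```

Starting LM from the closed-form linear solution is what makes the nonlinear stage reliable: the initial estimate is already close to the basin of the global minimum.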
