Abstract

This study presents a robust approach to reconstructing three-dimensional (3-D) translucent objects using a single time-of-flight (ToF) depth camera and simple user marks. Because the appearance of a translucent object depends on how light interacts with the surrounding environment, depth-camera measurements of such objects are considerably biased or even invalid. Although several existing methods attempt to model the depth error of translucent objects, their models remain partial owing to restrictive object assumptions and sensitivity to noise. In this study, we introduce a ground plane and a piecewise-linear surface model as priors and construct a robust 3-D reconstruction framework for translucent objects. These two depth priors are combined with a depth error model built on the ToF principle. Extensive evaluation on various real data shows that the proposed method substantially improves the accuracy and reliability of 3-D reconstruction for translucent objects.

© 2017 Optical Society of America


References


  1. B. Huhle, T. Schairer, P. Jenke, and W. Strasser, “Robust non-local denoising of colored depth data,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshop (IEEE, 2008), pp. 1–7.
  2. L. Jovanov, A. Pižurica, and W. Philips, “Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras,” Opt. Express 18(22), 22651–22676 (2010).
    [Crossref] [PubMed]
  3. H. Schäfer, F. Lenzen, and C. Garbe, “Model based scattering correction in time-of-flight cameras,” Opt. Express 22, 29835–29846 (2014).
    [Crossref]
  4. J. Park, H. Kim, Y.-W. Tai, M. S. Brown, and I. Kweon, “High quality depth map upsampling for 3D-TOF cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 1623–1630.
  5. S. Schuon, C. Theobalt, J. Davis, and S. Thrun, “High-quality scanning using time-of-flight depth superresolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshop, (IEEE, 2008), pp. 1–7.
  6. H. Shim and S. Lee, “Performance evaluation of time-of-flight and structured light depth sensors in radiometric/geometric variations,” Opt. Eng. 51(1), 94401–94414 (2012).
    [Crossref]
  7. H. Shim and S. Lee, “Recovering translucent objects using a single time-of-flight depth camera,” IEEE Trans. Circuits Syst. Video Technol. 26(5), 841–854 (2016).
    [Crossref]
  8. K. Tanaka, Y. Mukaigawa, H. Kubo, Y. Matsushita, and Y. Yagi, “Recovering Transparent Shape from Time-of-Flight Distortion,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 4387–4395.
  9. S. Foix, G. Alenya, and C. Torras, “Lock-in time-of-flight (ToF) cameras: a survey,” IEEE Sens. J. 11(9), 1917–1926 (2011).
    [Crossref]
  10. M. Gupta, S. K. Nayar, M. B. Hullin, and J. Martin, “Phasor imaging: a generalization of correlation-based time-of-flight imaging,” ACM Trans. Graph. 34(5), 1–18 (2015).
    [Crossref]
  11. A. Bhandari, A. Kadambi, R. Whyte, C. Barsi, M. Feigin, A. Dorrington, and R. Raskar, “Resolving multipath interference in time-of-flight imaging via modulation frequency diversity and sparse regularization,” Opt. Lett. 39, 1705–1708 (2014).
    [Crossref] [PubMed]
  12. S. Fuchs, “Multipath Interference Compensation in Time-of-Flight Camera Images,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 3583–3586.
  13. D. Jimenez, D. Pizarro, M. Mazo, and S. Palazuelos, “Modeling and correction of multipath interference in time of flight cameras,” Image Vis. Comput. 32(1), 1–13 (2014).
    [Crossref]
  14. M. Feigin, A. Bhandari, S. Izadi, C. Rhemann, M. Schmidt, and R. Raskar, “Resolving multipath interference in Kinect: an inverse problem approach,” IEEE Sens. J. 16(10), 3419–3427 (2016).
    [Crossref]
  15. D. Freedman, E. Krupka, Y. Smolin, I. Leichter, and M. Schmidt, “SRA: fast removal of general multipath for ToF sensors,” in Proceedings of European Conference on Computer Vision (Springer, 2014), pp. 234–249.
  16. M. Feigin, R. Whyte, A. Bhandari, A. Dorrington, and R. Raskar, “Modeling ‘wiggling’ as a multi-path interference problem in AMCW ToF imaging,” Opt. Express 23(15), 19213–19225 (2015).
    [Crossref] [PubMed]
  17. A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graph. 32(6), 167 (2013).
  18. S. Lee and H. Shim, “Skewed stereo time-of-flight camera for translucent object imaging,” Image Vis. Comput. 43, 27–38 (2015).
    [Crossref]
  19. H. Shim and S. Lee, “Hybrid exposure for depth imaging of a time-of-flight depth sensor,” Opt. Express 22(11), 13393–13402 (2014).
    [Crossref] [PubMed]
  20. H. Sarbolandi, D. Lefloch, and A. Kolb, “Kinect range sensing: structured-light versus time-of-flight Kinect,” Comput. Vis. Image Und. 139, 1–20 (2015).
    [Crossref]
  21. J. Blake, F. Echtler, and C. Kerl, “OpenKinect: open source drivers for the Kinect for Windows v2 device,” https://github.com/OpenKinect/libfreenect2.




Figures (5)

Fig. 1 Overview of the proposed algorithm.
Fig. 2 Workflow for separating p_b, T, and g using user scribbles.
Fig. 3 Qualitative comparison of recovered depth maps. From left to right: raw data, recovered depth maps using E1, E2, E3, and E4, and ground truth. A snapshot of each experimental object is displayed at the top-left corner of each row. Note that E1 denotes the algorithm proposed in [7], and E4 is the proposed algorithm.
Fig. 4 Depth map reconstruction using Kinect 2.0. From left to right: raw data, recovered depth maps using E1 and E4, and ground truth. A snapshot of each experimental object is displayed at the top-left corner of each row. For each depth map, we report the RMS error (mm), computed after applying the median filter. Bold values indicate the best results for a given object.
Fig. 5 Experimental results for various objects. From left to right: raw data and recovered depth maps using E1 and E4. A snapshot of each experimental object is displayed at the top-left corner of each row. The lamp cover is made of paper, whereas the vinyl sheet is a flat, thin sheet of vinyl. The recovered depth maps show that the proposed method recovers the original shape and reduces the number of undesirable surface notches.

Tables (4)

Table 1 Summary of variables used in Eq. (1). Please see [7] for details.

Table 2 List of new variables for Eqs. (3)–(5).

Table 3 Comparison of average depth reconstruction errors using three metrics. The three experimental objects shown in Fig. 3 are used to compute the average reconstruction error per metric: rel is the relative error, RMS (mm) is the root mean squared error, and log10 is the absolute difference of the base-10 logarithms of the depth maps. Bold values indicate the best result for a given error metric.
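The three metrics in Table 3 can be computed directly from a pair of depth maps. A minimal sketch (the function name and the zero-valued invalid-pixel convention are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def depth_errors(pred, gt):
    """Compute rel, RMS (same unit as the input), and log10 errors
    between a predicted depth map and its ground truth."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    valid = gt > 0  # assumption: zero ground-truth pixels are invalid
    p, g = pred[valid], gt[valid]
    rel = np.mean(np.abs(p - g) / g)                      # relative error
    rms = np.sqrt(np.mean((p - g) ** 2))                  # root mean squared error
    log10 = np.mean(np.abs(np.log10(p) - np.log10(g)))   # abs diff of log10 depths
    return rel, rms, log10
```

For example, a prediction of 110 mm and 90 mm against a uniform 100 mm ground truth yields rel = 0.1 and RMS = 10 mm.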

Table 4 Reconstruction errors of the depth maps based on the choice of energy function. We use the RMS error, in mm, as the error metric. A “−” indicates that no post-processing is applied before errors are computed, whereas “Median” refers to median filtering used to eliminate outliers before errors are computed. Note that E1 denotes the algorithm proposed in [7], and E4 is the proposed algorithm. Bold values indicate the best two results for a given object.

Equations (7)


$$\dot{d} = f(d_f, d_b, \xi, \dot{I}, I_b) = \frac{c}{2}\tan^{-1}\left\{\frac{A\,(Q_3^b - Q_4^b) + B\,(Q_3^f - Q_4^f)}{A\,(Q_1^b - Q_2^b) + B\,(Q_1^f - Q_2^f)}\right\}, \tag{1}$$
$$A = \frac{(1-\xi)L_{\mathrm{in}} + \sqrt{(1-\xi)^2 L_{\mathrm{in}}^2 + 4\xi^2 I_b \dot{I}\, d_f^4}}{2 L_{\mathrm{in}} d_f^2}, \qquad B = \frac{1-\xi}{d_f^2}. \tag{2}$$
$$[\hat{d}_f, \hat{\xi}] = \arg\min_{d_f,\,\xi} \left\| f(d_f, \xi, d_b) - \dot{d} \right\|^2. \tag{3}$$
$$E_d = \frac{1}{|T|}\sum_{p_f \in T} \left\| f(p_f, \xi, p_b) - \dot{p} \right\|^2. \tag{4}$$
$$E_g = \frac{1}{|T|}\sum_{p_f \in T} \left\| p_f(z) - g \right\|^2, \tag{5}$$
$$E_s = \frac{1}{|N|}\sum_{p_f \in N} \left\| h(p_f) \right\|^2, \tag{6}$$
$$[\hat{p}_f, \hat{\xi}] = \arg\min_{p_f,\,\xi}\; \alpha E_d + \beta E_g + \gamma E_s, \qquad 0 \le \alpha, \beta, \gamma \le 1,\; \alpha + \beta + \gamma = 1. \tag{7}$$
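Eqs. (4)–(7) combine a data term, a ground-plane prior, and a surface-smoothness prior into one weighted objective. The structure can be sketched on a toy 1-D depth profile as follows (a minimal illustration only: the placeholder forward model, the second-difference smoothness term, the weights, and all names are assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins: observed (biased) depths, background depth, ground-plane height.
d_obs = np.array([1.05, 1.10, 1.08, 1.12, 1.07])  # measured depths (m)
d_b, g = 2.0, 1.0                                  # background depth, plane height

def forward(d_f, xi):
    """Placeholder for f(d_f, xi, d_b): a convex blend of foreground and
    background depths, standing in for the full model of Eqs. (1)-(2)."""
    return (1.0 - xi) * d_f + xi * d_b

def energy(params, alpha=0.6, beta=0.2, gamma=0.2):
    """Weighted objective of Eq. (7): alpha*E_d + beta*E_g + gamma*E_s."""
    d_f, xi = params[:-1], params[-1]
    e_d = np.mean((forward(d_f, xi) - d_obs) ** 2)  # data term, Eq. (4)
    e_g = np.mean((d_f - g) ** 2)                   # ground-plane prior, Eq. (5)
    e_s = np.mean(np.diff(d_f, n=2) ** 2)           # smoothness prior, Eq. (6)
    return alpha * e_d + beta * e_g + gamma * e_s

# Jointly estimate the depth profile and translucency parameter xi.
x0 = np.concatenate([d_obs, [0.1]])
res = minimize(energy, x0, bounds=[(0.1, 3.0)] * len(d_obs) + [(0.0, 0.9)])
d_hat, xi_hat = res.x[:-1], res.x[-1]
```

The second-difference penalty is one common way to realize a piecewise-linear surface prior, since it vanishes on linear segments; the true implementation in [7] and in this paper may differ.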
