Abstract

Mask-based lensless imagers are smaller and lighter than traditional lensed cameras. In these imagers, the sensor does not directly record an image of the scene; rather, a computational algorithm reconstructs it. Typically, mask-based lensless imagers use a model-based reconstruction approach that suffers from long compute times and a heavy reliance on both system calibration and heuristically chosen denoisers. In this work, we address these limitations using a bounded-compute, trainable neural network to reconstruct the image. We leverage our knowledge of the physical system by unrolling a traditional model-based optimization algorithm, whose parameters we optimize using experimentally gathered ground-truth data. Optionally, images produced by the unrolled network are then fed into a jointly-trained denoiser. As compared to traditional methods, our architecture achieves better perceptual image quality and runs 20$\times$ faster, enabling interactive previewing of the scene. We explore a spectrum between model-based and deep learning methods, showing the benefits of using an intermediate approach. Finally, we test our network on images taken in the wild with a prototype mask-based camera, demonstrating that our network generalizes to natural images.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Figures (7)

Fig. 1. Overview of our imaging pipeline. During training, images are displayed on a computer screen and captured simultaneously with both a lensed and a lensless camera to form training pairs, with the lensed images serving as ground-truth labels. The lensless measurements are fed into a model-based network which incorporates knowledge about the physics of the imager. The output of the network is compared with the labels using a loss function, and the network parameters are updated through backpropagation. During operation, the lensless imager takes measurements and the trained model-based network reconstructs the images, providing a large speedup in reconstruction time and an improvement in image quality.
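
For concreteness, here is a minimal PyTorch sketch of the training loop described in Fig. 1. All names and hyperparameters are illustrative placeholders; the authors' released code may differ in detail.

```python
import torch

def train(model, loader, loss_fn, epochs=5, lr=1e-4):
    """Fit an unrolled reconstruction network on (lensless, lensed) pairs.

    `model` maps a raw lensless measurement to an image; `loader` yields
    measurement/ground-truth pairs captured as described in Fig. 1.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for meas, target in loader:
            recon = model(meas)            # reconstruct from the raw measurement
            loss = loss_fn(recon, target)  # e.g., MSE and/or LPIPS vs. the lensed image
            opt.zero_grad()
            loss.backward()                # backpropagate through the unrolled solver
            opt.step()
    return model
```
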
Fig. 2. Networks on a scale from classic to deep. We present several networks specifically designed for lensless imaging (Le-ADMM, Le-ADMM*, and Le-ADMM-U). We compare these to classic approaches, which have no learnable parameters, and to purely deep methods, which do not include any knowledge of the imaging model. We show the utility of using an algorithm in this middle range compared to a purely classic or deep method. $\Theta$ summarizes the parameters that are learned for each network, as discussed in Section 4.
Fig. 3. Model-based network architecture. The input measurement and the calibration PSF are first fed into N layers of unrolled Le-ADMM. At each layer, the updates corresponding to $\mathbf{S}^{k+1}$ in Eq. (5) are applied. The output of the unrolled network can then be fed into an optional denoiser network. The network parameters are updated based on a loss function comparing the output image to the lensed image. Red arrows represent backpropagation through the network parameters.
Fig. 4. Test set results, with the raw DiffuserCam measurement (contrast stretched) and the ground truth images from the lensed camera for reference. Le-ADMM (71 ms) has similar image quality to converged ADMM (1.5 s) and better image quality than bounded ADMM (71 ms). Le-ADMM* and Le-ADMM-U have noticeably better visual image quality. The U-Net by itself is unable to reconstruct the appropriate colors and lacks detail.
Fig. 5. Network performance on test set. (a) MSE, LPIPS, and data fidelity values for all image pairs in our test set. On average, our learned networks (green) are more similar to the ground truth lensed images (lower MSE and LPIPS) than 5 iterations of ADMM. Furthermore, our networks have comparable performance to ADMM (100), which takes 20$\times$ longer than Le-ADMM and Le-ADMM-U. However, the data fidelity term is higher for the learned methods, indicating that these reconstructions are less consistent with the image formation model. (b) Performance after each layer (equivalently, each ADMM iteration) in our network, showing that MSE and LPIPS generally decrease throughout the layers. The U-Net denoiser layer in Le-ADMM-U significantly decreases the LPIPS and MSE values, at the cost of data fidelity.
Fig. 6. Network performance on objects in the wild (toys and a plant) captured with our lensless camera. We show the raw measurement (contrast stretched) on the top row, followed by converged ADMM, ADMM bounded to 5 iterations, our learned networks, and a U-Net for comparison. Our learned networks achieve image quality similar to or better than converged ADMM, and Le-ADMM-U has the best image quality. For instance, Le-ADMM-U captures the details in the sideways plant (second column from left) and the eye of the toy duck (right). The U-Net alone has good image quality but is missing some colors and details (e.g., the first image is washed out and the nose of the alligator toy is miscolored).
Fig. 7. Effect of training set size. We vary the number of images in the training set and plot the LPIPS score after 5 epochs. Le-ADMM-U performs better and converges faster than a U-Net alone, whereas Le-ADMM does not improve as the number of training images increases, since it has so few parameters.

Tables (4)

Table 1. Loss functions
Table 2. Network performance on test set
Table 3. Network architecture for U-Net used in Le-ADMM-U
Table 4. Network architecture for U-Net used in Le-ADMM*

Equations (6)

$$ b(x,y) = \mathrm{crop}\left[h(x,y) * x(x,y)\right] = \mathbf{C}\mathbf{H}x, \tag{1} $$
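
For intuition, Eq. (1) can be simulated with an FFT-based convolution followed by a center crop. The sketch below assumes the scene and PSF are pre-padded to twice the sensor size so that circular convolution matches linear convolution over the cropped region; these conventions are illustrative, not the paper's exact implementation.

```python
import numpy as np

def forward_model(x, psf):
    """Simulate b = crop[h * x] = CHx for a 2D scene and PSF (Eq. (1))."""
    # H: circular convolution with the caustic PSF, computed in Fourier space.
    b_full = np.real(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(psf)) * np.fft.fft2(x)))
    # C: crop the padded result back down to the physical sensor extent.
    h, w = b_full.shape
    return b_full[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
```
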
$$ \hat{x} = \arg\min_{x \geq 0}\; \frac{1}{2}\left\|b - \mathbf{C}\mathbf{H}x\right\|_2^2 + \tau\left\|\Psi x\right\|_1, \tag{2} $$
$$ \hat{x} = \arg\min_{w \geq 0,\, u,\, v}\; \frac{1}{2}\left\|b - \mathbf{C}v\right\|_2^2 + \tau\left\|u\right\|_1, \quad \text{s.t. } v = \mathbf{H}x,\; u = \Psi x,\; w = x. \tag{3} $$
$$
\begin{aligned}
u^{k+1} &\leftarrow \mathcal{T}_{\tau/\mu_2}\!\left(\Psi x^k + \alpha_2^k/\mu_2\right) && \text{sparsifying soft-threshold}\\
v^{k+1} &\leftarrow \left(\mathbf{C}^T\mathbf{C} + \mu_1 I\right)^{-1}\left(\alpha_1^k + \mu_1 \mathbf{H} x^k + \mathbf{C}^T b\right) && \text{least-squares update}\\
w^{k+1} &\leftarrow \max\!\left(\alpha_3^k/\mu_3 + x^k,\, 0\right) && \text{enforce non-negativity}\\
x^{k+1} &\leftarrow \left(\mu_1 \mathbf{H}^T\mathbf{H} + \mu_2 \Psi^T\Psi + \mu_3 I\right)^{-1} r^k && \text{least-squares update}\\
\alpha_1^{k+1} &\leftarrow \alpha_1^k + \mu_1\left(\mathbf{H} x^{k+1} - v^{k+1}\right) && \text{dual for } v\\
\alpha_2^{k+1} &\leftarrow \alpha_2^k + \mu_2\left(\Psi x^{k+1} - u^{k+1}\right) && \text{dual for } u\\
\alpha_3^{k+1} &\leftarrow \alpha_3^k + \mu_3\left(x^{k+1} - w^{k+1}\right) && \text{dual for } w
\end{aligned} \tag{4}
$$
where $r^k = \left(\mu_3 w^{k+1} - \alpha_3^k\right) + \Psi^T\left(\mu_2 u^{k+1} - \alpha_2^k\right) + \mathbf{H}^T\left(\mu_1 v^{k+1} - \alpha_1^k\right)$.
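
In code, one iteration of Eq. (4) follows directly. The sketch below treats the linear operators abstractly: `ops` is an assumed container providing $\mathbf{H}$ and its adjoint, the sparsifying transform $\Psi$ and its adjoint, the crop adjoint, and the two least-squares solves, which are inexpensive because $\mathbf{H}$ is a convolution (diagonalized by the FFT) and $\mathbf{C}^T\mathbf{C}$ is a binary mask. The `v_solve`/`x_solve` names are hypothetical.

```python
import numpy as np

def soft_threshold(x, k):
    """l1 proximal operator: T_k(x) = sign(x) * max(|x| - k, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - k, 0)

def admm_step(x, u, v, w, a1, a2, a3, b, ops, mu1, mu2, mu3, tau):
    """One iteration of the Eq. (4) updates, in the same order."""
    u = soft_threshold(ops.Psi(x) + a2 / mu2, tau / mu2)    # sparsifying step
    v = ops.v_solve(a1 + mu1 * ops.H(x) + ops.Ct(b), mu1)   # (C^T C + mu1 I)^{-1}(.)
    w = np.maximum(a3 / mu3 + x, 0)                         # non-negativity
    r = (mu3 * w - a3) + ops.Psit(mu2 * u - a2) + ops.Ht(mu1 * v - a1)
    x = ops.x_solve(r, mu1, mu2, mu3)   # (mu1 H^T H + mu2 Psi^T Psi + mu3 I)^{-1} r
    a1 = a1 + mu1 * (ops.H(x) - v)                          # dual updates
    a2 = a2 + mu2 * (ops.Psi(x) - u)
    a3 = a3 + mu3 * (x - w)
    return x, u, v, w, a1, a2, a3
```
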
$$
\mathbf{S}^{k+1}
\begin{cases}
u^{k+1} \leftarrow \mathcal{T}_{\tau^k}\!\left(\Psi(x^k) + \alpha_2^k/\mu_2^k\right) & \text{sparsifying soft-thresholding}\\
v^{k+1} \leftarrow \left(\mathbf{C}^T\mathbf{C} + \mu_1 I\right)^{-1}\left(\alpha_1^k + \mu_1^k \mathbf{H} x^k + \mathbf{C}^T b\right) & \text{least-squares update}\\
w^{k+1} \leftarrow \max\!\left(\alpha_3^k/\mu_3^k + x^k,\, 0\right) & \text{enforce non-negativity}\\
x^{k+1} \leftarrow \left(\mu_1^k \mathbf{H}^T\mathbf{H} + \mu_2^k \Psi^T\Psi + \mu_3^k I\right)^{-1} r^k & \text{least-squares update}\\
\alpha_1^{k+1} \leftarrow \alpha_1^k + \mu_1^k\left(\mathbf{H} x^{k+1} - v^{k+1}\right) & \text{dual for } v\\
\alpha_2^{k+1} \leftarrow \alpha_2^k + \mu_2^k\left(\Psi(x^{k+1}) - u^{k+1}\right) & \text{dual for } u\\
\alpha_3^{k+1} \leftarrow \alpha_3^k + \mu_3^k\left(x^{k+1} - w^{k+1}\right) & \text{dual for } w
\end{cases} \tag{5}
$$
where $r^k = \left(\mu_3^k w^{k+1} - \alpha_3^k\right) + \Psi^T\left(\mu_2^k u^{k+1} - \alpha_2^k\right) + \mathbf{H}^T\left(\mu_1^k v^{k+1} - \alpha_1^k\right)$.
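
Unrolling then amounts to stacking N such steps and registering the per-layer hyperparameters as trainable weights. A minimal PyTorch sketch, assuming a differentiable (torch-based) `admm_step` as above and a hypothetical `init_state` helper; the initial parameter values are placeholders, and a positivity constraint (e.g., a softplus) could be added:

```python
import torch
import torch.nn as nn

class LeADMM(nn.Module):
    """Unrolled ADMM with per-layer learnable mu and tau (Eq. (5))."""
    def __init__(self, ops, n_layers=5):
        super().__init__()
        self.ops, self.n_layers = ops, n_layers
        # One hyperparameter value per unrolled layer, all trainable.
        self.mu1 = nn.Parameter(1e-4 * torch.ones(n_layers))
        self.mu2 = nn.Parameter(1e-4 * torch.ones(n_layers))
        self.mu3 = nn.Parameter(1e-4 * torch.ones(n_layers))
        self.tau = nn.Parameter(1e-3 * torch.ones(n_layers))

    def forward(self, b):
        state = init_state(b, self.ops)  # zero-filled x, u, v, w and duals
        for k in range(self.n_layers):
            state = admm_step(*state, b, self.ops,
                              self.mu1[k], self.mu2[k], self.mu3[k], self.tau[k])
        return state[0]  # the reconstructed image x
```

Because each layer reuses the known physics through $\mathbf{H}$, the network needs only a handful of parameters, which is consistent with Fig. 7: Le-ADMM saturates with very little training data.
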
$$
\mathbf{S}^{k+1}
\begin{cases}
u^{k+1} \leftarrow \mathcal{N}(x^k) & \text{network regularizer}\\
v^{k+1} \leftarrow \left(\mathbf{C}^T\mathbf{C} + \mu_1 I\right)^{-1}\left(\alpha_1^k + \mu_1^k \mathbf{H} x^k + \mathbf{C}^T b\right) & \text{least-squares update}\\
w^{k+1} \leftarrow \max\!\left(\alpha_3^k/\mu_3^k + x^k,\, 0\right) & \text{enforce non-negativity}\\
x^{k+1} \leftarrow \left(\mu_1^k \mathbf{H}^T\mathbf{H} + \mu_2^k I + \mu_3^k I\right)^{-1} r^k & \text{least-squares update}\\
\alpha_1^{k+1} \leftarrow \alpha_1^k + \mu_1^k\left(\mathbf{H} x^{k+1} - v^{k+1}\right) & \text{dual for } v\\
\alpha_3^{k+1} \leftarrow \alpha_3^k + \mu_3^k\left(x^{k+1} - w^{k+1}\right) & \text{dual for } w
\end{cases} \tag{6}
$$
where $r^k = \left(\mu_3^k w^{k+1} - \alpha_3^k\right) + \mu_2^k u^{k+1} + \mathbf{H}^T\left(\mu_1^k v^{k+1} - \alpha_1^k\right)$.
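
In Eq. (6), Le-ADMM* replaces the hand-chosen soft-threshold with a learned denoiser $\mathcal{N}(\cdot)$ inside each layer; the paper uses a small U-Net for this (Table 4). Below, a tiny two-layer CNN stands in as an illustrative placeholder, not the actual architecture:

```python
import torch.nn as nn

class NetworkRegularizer(nn.Module):
    """Learned substitute for the sparsifying update in Eq. (6): u <- N(x).
    A stand-in module; the actual Le-ADMM* regularizer is the U-Net of Table 4."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)
```
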
