Abstract

With a single-photon camera (SPC), imaging under ultra-weak lighting conditions has wide-ranging applications, from remote sensing to night vision, but it suffers severely from the under-sampling inherent in SPC detection. Previous approaches address the under-sampling problem by detecting the objects many times to accumulate high-resolution images and by performing noise reduction to suppress the Poisson noise inherent in low-flux operation. To address the under-sampling problem more effectively, this paper develops a new approach that reconstructs high-resolution, lower-noise images by seamlessly integrating low-light-level imaging with deep learning. In our approach, each object is detected only once by the SPC, and a deep network is trained to reduce noise and reconstruct high-resolution images from the detected noisy under-sampled images. To demonstrate the feasibility of our proposal, we first verify it experimentally on a specific object category: human faces. The trained network recovers high-resolution, lower-noise face images from new noisy under-sampled face images at a 4× up-scaling factor. Our experimental results demonstrate that the proposed method can generate high-quality images from only ~0.2 detected signal photons per pixel.
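To make the detection regime concrete, the following sketch simulates what a single noisy under-sampled SPC frame looks like under the conditions described above (a 4× under-sampled grid and a mean of ~0.2 signal photons per pixel). This is an illustration of Poisson-limited detection, not the authors' exact acquisition pipeline; the function name and block-averaging down-sampling are assumptions for the example.

```python
import numpy as np

def simulate_spc_detection(image, upscale=4, mean_signal_photons=0.2, seed=0):
    """Simulate one noisy under-sampled single-photon-camera frame.

    Illustrative only: block-average the high-resolution scene by the
    up-scaling factor (128x128 -> 32x32 for upscale=4), rescale so the
    mean expected flux is ~mean_signal_photons per pixel, then draw
    Poisson-distributed photon counts, as appropriate at low flux.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Under-sample: average each upscale x upscale block of pixels.
    low = image.reshape(h // upscale, upscale, w // upscale, upscale).mean(axis=(1, 3))
    # Rescale intensity to the target mean photon count per pixel.
    flux = low / max(low.mean(), 1e-12) * mean_signal_photons
    # Photon detection at low flux follows Poisson statistics.
    return rng.poisson(flux)

hr = np.random.default_rng(1).random((128, 128))  # stand-in for a 128x128 face image
counts = simulate_spc_detection(hr)
print(counts.shape)   # (32, 32)
```

Most pixels in such a frame record zero photons, which is exactly the regime the reconstruction network must cope with.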

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement






Figures (6)

Fig. 1 Setup for the experimental demonstration.
Fig. 2 Our network structure.
Fig. 3 Experimental results. Six examples of the training samples. (a) The original sub-high-resolution objects. (b) The noisy under-sampled images of (a) in case 1. (c) The noisy under-sampled images of (a) in case 2. (d) The noisy under-sampled images of (a) in case 3. The resolution of (a) is 128 × 128; the resolution of (b), (c), and (d) is 32 × 32.
Fig. 4 Experimental results. (a)~(h) Eight examples of the testing samples. For each example, the first image is the original object image, followed by three noisy under-sampled images detected at three different light intensities; the last three images are the reconstructions corresponding to the three noisy under-sampled detected images, respectively. The resolution of the original objects and the reconstructed images is 128 × 128, whereas the resolution of the detected noisy under-sampled images is only 32 × 32.
Fig. 5 Experimental results of the CNN. (a)~(h) Eight examples of the testing samples. For each example, the first image is the original object image, followed by three noisy under-sampled images detected at three different light intensities; the last three images are the CNN reconstructions corresponding to the three noisy under-sampled detected images, respectively.
Fig. 6 Experimental results. Ten examples of the testing non-face samples in case 1. (a) The original sub-high-resolution objects. (b) The noisy under-sampled images of (a) in case 1. (c) The reconstructed images of (b).

Tables (5)

Table 1 PSNR (dB) of the reconstructed images in three weak-light-level conditions
Table 2 PSNR (dB) of the eight test samples in three weak-light-level conditions
Table 3 Average number of photons per pixel in three weak-light-level conditions
Table 4 PSNR (dB) of the images reconstructed by CNN in three cases
Table 5 PSNR (dB) of the images reconstructed by CNN in three weak-light-level conditions

Equations (5)


$$I_{SR} = PS\left(w * I_{LR} + b\right)$$
$$MSE = \frac{1}{N_x \times N_y} \sum_{i=1}^{N_x \times N_y} \left(y_i - y'_i\right)^2$$
$$CE = H(p, q) = -\sum_x p(x) \log q(x)$$
$$PSNR(\mathrm{dB}) = 10 \times \lg \frac{255^2}{MSE}$$
$$I = \frac{1}{N_x \times N_y} \left( \sum_{i=1}^{N_{x1} \times N_{x1}} I_{i1} - \sum_{i=1}^{N_{x1} \times N_{x1}} I_i \right)$$
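The quantities listed above can be sketched in NumPy. The periodic-shuffling operator PS below follows the standard sub-pixel-convolution rearrangement, and the metric functions follow the MSE, cross-entropy, and PSNR definitions on this page; the function names and the tiny worked inputs are illustrative assumptions, not the authors' code.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Periodic shuffling PS: rearrange an (H, W, C*r^2) feature stack
    into an (H*r, W*r, C) image, as in sub-pixel convolution."""
    h, w, c = x.shape
    assert c % (r * r) == 0
    out_c = c // (r * r)
    x = x.reshape(h, w, r, r, out_c)
    x = x.transpose(0, 2, 1, 3, 4)  # interleave the r x r sub-grids spatially
    return x.reshape(h * r, w * r, out_c)

def mse(y, y_hat):
    """MSE = (1 / (Nx * Ny)) * sum_i (y_i - y'_i)^2."""
    return float(np.mean((np.asarray(y, float) - np.asarray(y_hat, float)) ** 2))

def cross_entropy(p, q, eps=1e-12):
    """CE = H(p, q) = -sum_x p(x) log q(x); eps guards against log(0)."""
    return float(-np.sum(p * np.log(q + eps)))

def psnr(y, y_hat):
    """PSNR (dB) = 10 * log10(255^2 / MSE) for 8-bit-range images."""
    return 10.0 * np.log10(255.0 ** 2 / mse(y, y_hat))

# Tiny worked examples.
tile = np.arange(4).reshape(1, 1, 4)     # one "pixel" carrying a 2x2 sub-grid
print(pixel_shuffle(tile, 2)[:, :, 0])   # [[0 1] [2 3]]
y, y_hat = np.zeros((2, 2)), np.ones((2, 2))
print(round(psnr(y, y_hat), 2))          # 48.13
```

Note that PSNR diverges as MSE approaches zero, so it is only reported for imperfect reconstructions, as in Tables 1, 2, 4, and 5.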
