Abstract

The spectral reflectance of objects provides intrinsic information on material properties that has proven beneficial in a diverse range of applications, e.g., remote sensing, agriculture, and diagnostic medicine. Existing methods for spectral reflectance recovery from RGB or monochromatic images either ignore the effect of illumination or design/optimize the illumination under the assumption that spectral reflectance admits a linear representation. In this paper, we present a simple and efficient convolutional neural network (CNN)-based spectral reflectance recovery method with optimal illuminations. Specifically, we design an illumination optimization layer that either optimally multiplexes illumination spectra from a given dataset or designs the optimal illumination under physical restrictions. Meanwhile, we develop a nonlinear representation for spectral reflectance in a data-driven way and jointly optimize the illuminations under this representation in a CNN-based end-to-end architecture. Experimental results on both synthetic and real data show that our method outperforms state-of-the-art approaches and verify the advantages of the deeply optimized illumination and the nonlinear representation of spectral reflectance.
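The image formation model underlying illumination multiplexing can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: all array sizes, the basis illuminants, and the normalization of the multiplexing weights are assumptions. A monochrome camera with sensitivity s(λ) observes a reflectance r(λ) lit by a multiplexed illumination l(λ) formed as a weighted mix of basis light spectra; in the proposed method such weights would be the learnable parameters of the illumination optimization layer.

```python
import numpy as np

# Illustrative forward model for illumination-multiplexed reflectance imaging.
# Sizes and spectra below are synthetic placeholders.
rng = np.random.default_rng(0)

n_bands = 31                               # spectral bands, e.g. 400-700 nm at 10 nm steps
n_basis = 8                                # number of available basis illuminants
n_pixels = 5                               # a few sample pixels

B = rng.uniform(size=(n_bands, n_basis))   # basis illumination spectra (one per column)
w = rng.uniform(size=n_basis)              # multiplexing weights (learnable in the network)
w = w / w.sum()                            # keep the mixed light non-negative with unit power
s = rng.uniform(size=n_bands)              # camera spectral sensitivity s(lambda)
R = rng.uniform(size=(n_pixels, n_bands))  # ground-truth reflectance r(lambda) per pixel

l = B @ w                                  # multiplexed illumination spectrum l(lambda)
measurements = R @ (l * s)                 # per-pixel response: sum over lambda of r*l*s

print(measurements.shape)                  # one scalar measurement per pixel: (5,)
```

End-to-end training then amounts to backpropagating the reconstruction loss through this linear forward model into the weights `w` (or, when designing a new light, into the spectrum itself under physical non-negativity and smoothness constraints).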

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



2018 (3)

X. Cao, F. Zhou, L. Xu, D. Meng, Z. Xu, and J. Paisley, “Hyperspectral image classification with markov random fields and a convolutional neural network,” IEEE Trans. Image Processing 27(5), 2354–2367 (2018).
[Crossref]

D. B. Gillis, J. H. Bowles, M. J. Montes, and W. J. Moses, “Propagation of sensor noise in oceanic hyperspectral remote sensing,” Opt. Express 26(18), A818–A831 (2018).
[Crossref]

Y. Fu, Y. Zheng, L. Zhang, and H. Huang, “Spectral reflectance recovery from a single rgb image,” IEEE Trans. Comput. Imaging 4(3), 382–394 (2018).
[Crossref]

2014 (1)

S. Han, I. Sato, T. Okabe, and Y. Sato, “Fast spectral reflectance recovery using DLP projector,” Int J Comput Vis 110(2), 172–184 (2014).
[Crossref]

2013 (1)

J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. M. Nasrabadi, and J. Chanussot, “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geosci. Remote Sens. Mag. 1(2), 6–36 (2013).
[Crossref]

2011 (1)

2010 (4)

A. Gorman, D. W. Fletcher-Holmes, and A. R. Harvey, “Generalization of the lyot filter and its application to snapshot spectral imaging,” Opt. Express 18(6), 5602–5608 (2010).
[Crossref]

L. Gao, R. T. Kester, N. Hagen, and T. S. Tkaczyk, “Snapshot image mapping spectrometer (ims) with high sampling density for hyperspectral microscopy,” Opt. Express 18(14), 14330–14344 (2010).
[Crossref]

F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, “Generalized assorted pixel camera: Postcapture control of resolution, dynamic range and spectrum,” IEEE Trans. Image Processing 19(9), 2241–2253 (2010).
[Crossref]

C. Chi, H. Yoo, and M. Ben-Ezra, “Multi-spectral imaging by optimized wide band illumination,” Int J Comput Vis 86(2-3), 140–151 (2010).
[Crossref]

2009 (1)

S. Nakariyakul and D. P. Casasent, “Fast feature selection algorithm for poultry skin tumor detection in hyperspectral data,” J. Food Eng. 94(3-4), 358–365 (2009).
[Crossref]

2006 (4)

L. L. Randeberg, I. Baarstad, T. Løke, P. Kaspersen, and L. O. Svaasand, “Hyperspectral imaging of bruised skin,” Proc. SPIE 6078, 60780O (2006).
[Crossref]

M. A. Loghmari, M. S. Naceur, and M. R. Boussema, “A spectral and spatial source separation of multispectral images,” IEEE Trans. Geosci. Electron. 44(12), 3659–3673 (2006).
[Crossref]

N. Gat, G. Scriven, J. Garman, M. D. Li, and J. Zhang, “Development of four-dimensional imaging spectrometers (4d-is),” Proc. SPIE 6302, 63020M (2006).
[Crossref]

M. Yamaguchi, H. Haneishi, H. Fukuda, J. Kishimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, “High-fidelity video and still-image communication based on spectral information: natural vision system and its applications,” Proc. SPIE 6062, 60620G (2006).
[Crossref]

2004 (1)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Processing 13(4), 600–612 (2004).
[Crossref]

2003 (1)

G. N. Stamatas, C. J. Balas, and N. Kollias, “Hyperspectral image acquisition and analysis of skin,” Proc. SPIE 4959, 77–83 (2003).
[Crossref]

2001 (1)

1995 (1)

R. W. Basedow, D. C. Carmer, and M. E. Anderson, “Hydice system: Implementation and performance,” Proc. SPIE 2480, 258–267 (1995).
[Crossref]

1993 (1)

F. A. Kruse, A. B. Lefkoff, J. W. Boardman, K. B. Heidebrecht, A. T. Shapiro, P. J. Barloon, and A. F. H. Goetz, “The spectral image processing system (sips)–interactive visualization and analysis of imaging spectrometer data,” Remote Sens. Environ. 44(2-3), 145–163 (1993).
[Crossref]

1989 (1)

Alvarez-Gila, A.

A. Alvarez-Gila, J. van de Weijer, and E. Garrote, “Adversarial networks for spatial context-aware spectral image reconstruction from rgb,” in The IEEE International Conference on Computer Vision Workshops, (2017).

Anderson, M. E.

R. W. Basedow, D. C. Carmer, and M. E. Anderson, “Hydice system: Implementation and performance,” Proc. SPIE 2480, 258–267 (1995).
[Crossref]

Arad, B.

B. Arad and O. Ben-Shahar, “Sparse recovery of hyperspectral signal from natural rgb images,” in Proceedings of European Conference on Computer Vision (2016) pp. 19–34.

Ba, J. L.

D. P. Kingma and J. L. Ba, “Adam: a method for stochastic optimization,” in Proceedings of International Conference on Learning Representations (2015).

Baarstad, I.

L. L. Randeberg, I. Baarstad, T. Løke, P. Kaspersen, and L. O. Svaasand, “Hyperspectral imaging of bruised skin,” Proc. SPIE 6078, 60780O (2006).
[Crossref]

Balas, C. J.

G. N. Stamatas, C. J. Balas, and N. Kollias, “Hyperspectral image acquisition and analysis of skin,” Proc. SPIE 4959, 77–83 (2003).
[Crossref]

Barloon, P. J.

F. A. Kruse, A. B. Lefkoff, J. W. Boardman, K. B. Heidebrecht, A. T. Shapiro, P. J. Barloon, and A. F. H. Goetz, “The spectral image processing system (sips)–interactive visualization and analysis of imaging spectrometer data,” Remote Sens. Environ. 44(2-3), 145–163 (1993).
[Crossref]

Basedow, R. W.

R. W. Basedow, D. C. Carmer, and M. E. Anderson, “Hydice system: Implementation and performance,” Proc. SPIE 2480, 258–267 (1995).
[Crossref]

Ben-Ezra, M.

C. Chi, H. Yoo, and M. Ben-Ezra, “Multi-spectral imaging by optimized wide band illumination,” Int J Comput Vis 86(2-3), 140–151 (2010).
[Crossref]

Ben-Shahar, O.

B. Arad and O. Ben-Shahar, “Sparse recovery of hyperspectral signal from natural rgb images,” in Proceedings of European Conference on Computer Vision (2016) pp. 19–34.

Bioucas-Dias, J. M.

J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. M. Nasrabadi, and J. Chanussot, “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geosci. Remote Sens. Mag. 1(2), 6–36 (2013).
[Crossref]

Boardman, J. W.

F. A. Kruse, A. B. Lefkoff, J. W. Boardman, K. B. Heidebrecht, A. T. Shapiro, P. J. Barloon, and A. F. H. Goetz, “The spectral image processing system (sips)–interactive visualization and analysis of imaging spectrometer data,” Remote Sens. Environ. 44(2-3), 145–163 (1993).
[Crossref]

Boussema, M. R.

M. A. Loghmari, M. S. Naceur, and M. R. Boussema, “A spectral and spatial source separation of multispectral images,” IEEE Trans. Geosci. Electron. 44(12), 3659–3673 (2006).
[Crossref]

Bovik, A. C.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Processing 13(4), 600–612 (2004).
[Crossref]

Bowles, J. H.

Brown, M. S.

S. Wug Oh, M. S. Brown, M. Pollefeys, and S. Joo Kim, “Do it yourself hyperspectral imaging with everyday digital cameras,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2016).

R. M. H. Nguyen, D. K. Prasad, and M. S. Brown, “Training-based spectral reconstruction from a single rgb image,” in Proceedings of European Conference on Computer Vision (2014), pp. 186–201.

Camps-Valls, G.

J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. M. Nasrabadi, and J. Chanussot, “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geosci. Remote Sens. Mag. 1(2), 6–36 (2013).
[Crossref]

Cao, X.

X. Cao, F. Zhou, L. Xu, D. Meng, Z. Xu, and J. Paisley, “Hyperspectral image classification with markov random fields and a convolutional neural network,” IEEE Trans. Image Processing 27(5), 2354–2367 (2018).
[Crossref]

Carmer, D. C.

R. W. Basedow, D. C. Carmer, and M. E. Anderson, “Hydice system: Implementation and performance,” Proc. SPIE 2480, 258–267 (1995).
[Crossref]

Casasent, D. P.

S. Nakariyakul and D. P. Casasent, “Fast feature selection algorithm for poultry skin tumor detection in hyperspectral data,” J. Food Eng. 94(3-4), 358–365 (2009).
[Crossref]

Chakrabarti, A.

A. Chakrabarti and T. Zickler, “Statistics of real-world hyperspectral images,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2011), pp. 193–200.

Chanussot, J.

J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. M. Nasrabadi, and J. Chanussot, “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geosci. Remote Sens. Mag. 1(2), 6–36 (2013).
[Crossref]

Chen, C.

Z. Shi, C. Chen, Z. Xiong, D. Liu, and F. Wu, “Hscnn+: Advanced cnn-based hyperspectral recovery from rgb images,” in The IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2018).

Chi, C.

C. Chi, H. Yoo, and M. Ben-Ezra, “Multi-spectral imaging by optimized wide band illumination,” Int J Comput Vis 86(2-3), 140–151 (2010).
[Crossref]

Descour, M. R.

Enmark, H. T.

W. M. Porter and H. T. Enmark, “A system overview of the airborne visible/infrared imaging spectrometer (aviris),” in Annual Technical Symposium, (1987), pp. 22–31.

Fletcher-Holmes, D. W.

Ford, B. K.

Fu, Y.

Y. Fu, Y. Zheng, L. Zhang, and H. Huang, “Spectral reflectance recovery from a single rgb image,” IEEE Trans. Comput. Imaging 4(3), 382–394 (2018).
[Crossref]

Y. Fu, T. Zhang, Y. Zheng, D. Zhang, and H. Huang, “Joint camera spectral sensitivity selection and hyperspectral image recovery,” in Proceedings of European Conference on Computer Vision (2018).

Fuchs, C.

M. Kitahara, T. Okabe, C. Fuchs, and H. P. Lensch, “Simultaneous estimation of spectral reflectance and normal from a small number of images,” in VISAPP (1), (2015), pp. 303–313.

Fukuda, H.

M. Yamaguchi, H. Haneishi, H. Fukuda, J. Kishimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, “High-fidelity video and still-image communication based on spectral information: natural vision system and its applications,” Proc. SPIE 6062, 60620G (2006).
[Crossref]

Gao, L.

Garman, J.

N. Gat, G. Scriven, J. Garman, M. D. Li, and J. Zhang, “Development of four-dimensional imaging spectrometers (4d-is),” Proc. SPIE 6302, 63020M (2006).
[Crossref]

Garrote, E.

A. Alvarez-Gila, J. van de Weijer, and E. Garrote, “Adversarial networks for spatial context-aware spectral image reconstruction from rgb,” in The IEEE International Conference on Computer Vision Workshops, (2017).

Gat, N.

N. Gat, G. Scriven, J. Garman, M. D. Li, and J. Zhang, “Development of four-dimensional imaging spectrometers (4d-is),” Proc. SPIE 6302, 63020M (2006).
[Crossref]

Gillis, D. B.

Goetz, A. F. H.

F. A. Kruse, A. B. Lefkoff, J. W. Boardman, K. B. Heidebrecht, A. T. Shapiro, P. J. Barloon, and A. F. H. Goetz, “The spectral image processing system (sips)–interactive visualization and analysis of imaging spectrometer data,” Remote Sens. Environ. 44(2-3), 145–163 (1993).
[Crossref]

Gorman, A.

Grossberg, M. D.

J.-I. Park, M.-H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of International Conference on Computer Vision (2007), pp. 1–8.

Gu, L.

S. Nie, L. Gu, Y. Zheng, A. Lam, N. Ono, and I. Sato, “Deeply learned filter response functions for hyperspectral reconstruction,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2018).

Y. Jia, Y. Zheng, L. Gu, A. Subpa-Asa, A. Lam, Y. Sato, and I. Sato, “From rgb to spectrum for natural scenes via manifold-based mapping,” in Proceedings of International Conference on Computer Vision (2017) pp. 4715–4723.

Hagen, N.

Hallikainen, J.

Han, S.

S. Han, I. Sato, T. Okabe, and Y. Sato, “Fast spectral reflectance recovery using DLP projector,” Int J Comput Vis 110(2), 172–184 (2014).
[Crossref]

Haneishi, H.

M. Yamaguchi, H. Haneishi, H. Fukuda, J. Kishimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, “High-fidelity video and still-image communication based on spectral information: natural vision system and its applications,” Proc. SPIE 6062, 60620G (2006).
[Crossref]

Harvey, A. R.

He, K.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in Proceedings of International Conference on Computer Vision (2015).

Heidebrecht, K. B.

F. A. Kruse, A. B. Lefkoff, J. W. Boardman, K. B. Heidebrecht, A. T. Shapiro, P. J. Barloon, and A. F. H. Goetz, “The spectral image processing system (sips)–interactive visualization and analysis of imaging spectrometer data,” Remote Sens. Environ. 44(2-3), 145–163 (1993).
[Crossref]

Hinton, G. E.

V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of International Conference on Machine Learning (2010), pp. 807–814.

Huang, H.

Y. Fu, Y. Zheng, L. Zhang, and H. Huang, “Spectral reflectance recovery from a single rgb image,” IEEE Trans. Comput. Imaging 4(3), 382–394 (2018).
[Crossref]

Y. Fu, T. Zhang, Y. Zheng, D. Zhang, and H. Huang, “Joint camera spectral sensitivity selection and hyperspectral image recovery,” in Proceedings of European Conference on Computer Vision (2018).

Iso, D.

F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, “Generalized assorted pixel camera: Postcapture control of resolution, dynamic range and spectrum,” IEEE Trans. Image Processing 19(9), 2241–2253 (2010).
[Crossref]

Iwama, R.

M. Yamaguchi, H. Haneishi, H. Fukuda, J. Kishimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, “High-fidelity video and still-image communication based on spectral information: natural vision system and its applications,” Proc. SPIE 6062, 60620G (2006).
[Crossref]

Jaaskelainen, T.

Jia, Y.

Y. Jia, Y. Zheng, L. Gu, A. Subpa-Asa, A. Lam, Y. Sato, and I. Sato, “From rgb to spectrum for natural scenes via manifold-based mapping,” in Proceedings of International Conference on Computer Vision (2017) pp. 4715–4723.

Joo Kim, S.

S. Wug Oh, M. S. Brown, M. Pollefeys, and S. Joo Kim, “Do it yourself hyperspectral imaging with everyday digital cameras,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2016).

Kanazawa, H.

M. Yamaguchi, H. Haneishi, H. Fukuda, J. Kishimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, “High-fidelity video and still-image communication based on spectral information: natural vision system and its applications,” Proc. SPIE 6062, 60620G (2006).
[Crossref]

Kaspersen, P.

L. L. Randeberg, I. Baarstad, T. Løke, P. Kaspersen, and L. O. Svaasand, “Hyperspectral imaging of bruised skin,” Proc. SPIE 6078, 60780O (2006).
[Crossref]

Kester, R. T.

Kingma, D. P.

D. P. Kingma and J. L. Ba, “Adam: a method for stochastic optimization,” in Proceedings of International Conference on Learning Representations (2015).

Kishimoto, J.

M. Yamaguchi, H. Haneishi, H. Fukuda, J. Kishimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, “High-fidelity video and still-image communication based on spectral information: natural vision system and its applications,” Proc. SPIE 6062, 60620G (2006).
[Crossref]

Kitahara, M.

M. Kitahara, T. Okabe, C. Fuchs, and H. P. Lensch, “Simultaneous estimation of spectral reflectance and normal from a small number of images,” in VISAPP (1), (2015), pp. 303–313.

Kollias, N.

G. N. Stamatas, C. J. Balas, and N. Kollias, “Hyperspectral image acquisition and analysis of skin,” Proc. SPIE 4959, 77–83 (2003).
[Crossref]

Kruse, F. A.

F. A. Kruse, A. B. Lefkoff, J. W. Boardman, K. B. Heidebrecht, A. T. Shapiro, P. J. Barloon, and A. F. H. Goetz, “The spectral image processing system (sips)–interactive visualization and analysis of imaging spectrometer data,” Remote Sens. Environ. 44(2-3), 145–163 (1993).
[Crossref]

Lam, A.

A. Lam and I. Sato, “Spectral modeling and relighting of reflective-fluorescent scenes,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2013).

A. Lam, A. Subpa-Asa, I. Sato, T. Okabe, and Y. Sato, “Spectral imaging using basis lights,” in Proceedings of Conference on British Machine Vision Conference (2013).

S. Nie, L. Gu, Y. Zheng, A. Lam, N. Ono, and I. Sato, “Deeply learned filter response functions for hyperspectral reconstruction,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2018).

Y. Jia, Y. Zheng, L. Gu, A. Subpa-Asa, A. Lam, Y. Sato, and I. Sato, “From rgb to spectrum for natural scenes via manifold-based mapping,” in Proceedings of International Conference on Computer Vision (2017) pp. 4715–4723.

Lee, M.-H.

J.-I. Park, M.-H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of International Conference on Computer Vision (2007), pp. 1–8.

Lefkoff, A. B.

F. A. Kruse, A. B. Lefkoff, J. W. Boardman, K. B. Heidebrecht, A. T. Shapiro, P. J. Barloon, and A. F. H. Goetz, “The spectral image processing system (sips)–interactive visualization and analysis of imaging spectrometer data,” Remote Sens. Environ. 44(2-3), 145–163 (1993).
[Crossref]

Lensch, H. P.

M. Kitahara, T. Okabe, C. Fuchs, and H. P. Lensch, “Simultaneous estimation of spectral reflectance and normal from a small number of images,” in VISAPP (1), (2015), pp. 303–313.

Li, H.

Z. Xiong, Z. Shi, H. Li, L. Wang, D. Liu, and F. Wu, “Hscnn: Cnn-based hyperspectral image recovery from spectrally undersampled projections,” in Proceedings of International Conference on Computer Vision - Workshops (2017).

Li, M. D.

N. Gat, G. Scriven, J. Garman, M. D. Li, and J. Zhang, “Development of four-dimensional imaging spectrometers (4d-is),” Proc. SPIE 6302, 63020M (2006).
[Crossref]

Liu, D.

Z. Xiong, Z. Shi, H. Li, L. Wang, D. Liu, and F. Wu, “Hscnn: Cnn-based hyperspectral image recovery from spectrally undersampled projections,” in Proceedings of International Conference on Computer Vision - Workshops (2017).

Z. Shi, C. Chen, Z. Xiong, D. Liu, and F. Wu, “Hscnn+: Advanced cnn-based hyperspectral recovery from rgb images,” in The IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2018).

Loghmari, M. A.

M. A. Loghmari, M. S. Naceur, and M. R. Boussema, “A spectral and spatial source separation of multispectral images,” IEEE Trans. Geosci. Electron. 44(12), 3659–3673 (2006).
[Crossref]

Løke, T.

L. L. Randeberg, I. Baarstad, T. Løke, P. Kaspersen, and L. O. Svaasand, “Hyperspectral imaging of bruised skin,” Proc. SPIE 6078, 60780O (2006).
[Crossref]

Lynch, R. M.

Meng, D.

X. Cao, F. Zhou, L. Xu, D. Meng, Z. Xu, and J. Paisley, “Hyperspectral image classification with markov random fields and a convolutional neural network,” IEEE Trans. Image Processing 27(5), 2354–2367 (2018).
[Crossref]

Mitsunaga, T.

F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, “Generalized assorted pixel camera: Postcapture control of resolution, dynamic range and spectrum,” IEEE Trans. Image Processing 19(9), 2241–2253 (2010).
[Crossref]

Montes, M. J.

Moses, W. J.

Naceur, M. S.

M. A. Loghmari, M. S. Naceur, and M. R. Boussema, “A spectral and spatial source separation of multispectral images,” IEEE Trans. Geosci. Electron. 44(12), 3659–3673 (2006).
[Crossref]

Nair, V.

V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of International Conference on Machine Learning (2010), pp. 807–814.

Nakariyakul, S.

S. Nakariyakul and D. P. Casasent, “Fast feature selection algorithm for poultry skin tumor detection in hyperspectral data,” J. Food Eng. 94(3-4), 358–365 (2009).
[Crossref]

Nansen, C.

Nasrabadi, N. M.

J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. M. Nasrabadi, and J. Chanussot, “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geosci. Remote Sens. Mag. 1(2), 6–36 (2013).
[Crossref]

Nayar, S. K.

F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, “Generalized assorted pixel camera: Postcapture control of resolution, dynamic range and spectrum,” IEEE Trans. Image Processing 19(9), 2241–2253 (2010).
[Crossref]

J.-I. Park, M.-H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of International Conference on Computer Vision (2007), pp. 1–8.

Nguyen, R. M. H.

R. M. H. Nguyen, D. K. Prasad, and M. S. Brown, “Training-based spectral reconstruction from a single rgb image,” in Proceedings of European Conference on Computer Vision (2014), pp. 186–201.

Nie, S.

S. Nie, L. Gu, Y. Zheng, A. Lam, N. Ono, and I. Sato, “Deeply learned filter response functions for hyperspectral reconstruction,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2018).

Ohyama, N.

M. Yamaguchi, H. Haneishi, H. Fukuda, J. Kishimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, “High-fidelity video and still-image communication based on spectral information: natural vision system and its applications,” Proc. SPIE 6062, 60620G (2006).
[Crossref]

Okabe, T.

S. Han, I. Sato, T. Okabe, and Y. Sato, “Fast spectral reflectance recovery using DLP projector,” Int J Comput Vis 110(2), 172–184 (2014).
[Crossref]

M. Kitahara, T. Okabe, C. Fuchs, and H. P. Lensch, “Simultaneous estimation of spectral reflectance and normal from a small number of images,” in VISAPP (1), (2015), pp. 303–313.

A. Lam, A. Subpa-Asa, I. Sato, T. Okabe, and Y. Sato, “Spectral imaging using basis lights,” in Proceedings of Conference on British Machine Vision Conference (2013).

Ono, N.

S. Nie, L. Gu, Y. Zheng, A. Lam, N. Ono, and I. Sato, “Deeply learned filter response functions for hyperspectral reconstruction,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2018).

Paisley, J.

X. Cao, F. Zhou, L. Xu, D. Meng, Z. Xu, and J. Paisley, “Hyperspectral image classification with markov random fields and a convolutional neural network,” IEEE Trans. Image Processing 27(5), 2354–2367 (2018).
[Crossref]

Park, J.-I.

J.-I. Park, M.-H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of International Conference on Computer Vision (2007), pp. 1–8.

Parkkinen, J. P.

Plaza, A.

J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. M. Nasrabadi, and J. Chanussot, “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geosci. Remote Sens. Mag. 1(2), 6–36 (2013).
[Crossref]

Pollefeys, M.

S. Wug Oh, M. S. Brown, M. Pollefeys, and S. Joo Kim, “Do it yourself hyperspectral imaging with everyday digital cameras,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2016).

Porter, W. M.

W. M. Porter and H. T. Enmark, “A system overview of the airborne visible/infrared imaging spectrometer (aviris),” in Annual Technical Symposium, (1987), pp. 22–31.

Prasad, D. K.

R. M. H. Nguyen, D. K. Prasad, and M. S. Brown, “Training-based spectral reconstruction from a single RGB image,” in Proceedings of European Conference on Computer Vision (2014), pp. 186–201.

Randeberg, L. L.

L. L. Randeberg, I. Baarstad, T. Løke, P. Kaspersen, and L. O. Svaasand, “Hyperspectral imaging of bruised skin,” Proc. SPIE 6078, 60780O (2006).
[Crossref]

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proceedings of International Conference on Computer Vision (2015).

Robles-Kelly, A.

A. Robles-Kelly, “Single image spectral reconstruction for multimedia applications,” in Proceedings of International Conference on Multimedia (2015), pp. 251–260.

Sato, I.

S. Han, I. Sato, T. Okabe, and Y. Sato, “Fast spectral reflectance recovery using DLP projector,” Int. J. Comput. Vis. 110(2), 172–184 (2014).
[Crossref]

S. Nie, L. Gu, Y. Zheng, A. Lam, N. Ono, and I. Sato, “Deeply learned filter response functions for hyperspectral reconstruction,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2018).

Y. Jia, Y. Zheng, L. Gu, A. Subpa-Asa, A. Lam, Y. Sato, and I. Sato, “From RGB to spectrum for natural scenes via manifold-based mapping,” in Proceedings of International Conference on Computer Vision (2017), pp. 4715–4723.

A. Lam and I. Sato, “Spectral modeling and relighting of reflective-fluorescent scenes,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2013).

A. Lam, A. Subpa-Asa, I. Sato, T. Okabe, and Y. Sato, “Spectral imaging using basis lights,” in Proceedings of Conference on British Machine Vision Conference (2013).

Sato, Y.

S. Han, I. Sato, T. Okabe, and Y. Sato, “Fast spectral reflectance recovery using DLP projector,” Int. J. Comput. Vis. 110(2), 172–184 (2014).
[Crossref]

Y. Jia, Y. Zheng, L. Gu, A. Subpa-Asa, A. Lam, Y. Sato, and I. Sato, “From RGB to spectrum for natural scenes via manifold-based mapping,” in Proceedings of International Conference on Computer Vision (2017), pp. 4715–4723.

A. Lam, A. Subpa-Asa, I. Sato, T. Okabe, and Y. Sato, “Spectral imaging using basis lights,” in Proceedings of Conference on British Machine Vision Conference (2013).

Scheunders, P.

J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. M. Nasrabadi, and J. Chanussot, “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geosci. Remote Sens. Mag. 1(2), 6–36 (2013).
[Crossref]

Scriven, G.

N. Gat, G. Scriven, J. Garman, M. D. Li, and J. Zhang, “Development of four-dimensional imaging spectrometers (4d-is),” Proc. SPIE 6302, 63020M (2006).
[Crossref]

Shapiro, A. T.

F. A. Kruse, A. B. Lefkoff, J. W. Boardman, K. B. Heidebrecht, A. T. Shapiro, P. J. Barloon, and A. F. H. Goetz, “The spectral image processing system (SIPS)–interactive visualization and analysis of imaging spectrometer data,” Remote Sens. Environ. 44(2-3), 145–163 (1993).
[Crossref]

Sheikh, H. R.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Processing 13(4), 600–612 (2004).
[Crossref]

Shi, Z.

Z. Xiong, Z. Shi, H. Li, L. Wang, D. Liu, and F. Wu, “HSCNN: CNN-based hyperspectral image recovery from spectrally undersampled projections,” in Proceedings of International Conference on Computer Vision - Workshops (2017).

Z. Shi, C. Chen, Z. Xiong, D. Liu, and F. Wu, “HSCNN+: Advanced CNN-based hyperspectral recovery from RGB images,” in The IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018).

Simoncelli, E. P.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Processing 13(4), 600–612 (2004).
[Crossref]

Stamatas, G. N.

G. N. Stamatas, C. J. Balas, and N. Kollias, “Hyperspectral image acquisition and analysis of skin,” Proc. SPIE 4959, 77–83 (2003).
[Crossref]

Subpa-Asa, A.

A. Lam, A. Subpa-Asa, I. Sato, T. Okabe, and Y. Sato, “Spectral imaging using basis lights,” in Proceedings of Conference on British Machine Vision Conference (2013).

Y. Jia, Y. Zheng, L. Gu, A. Subpa-Asa, A. Lam, Y. Sato, and I. Sato, “From RGB to spectrum for natural scenes via manifold-based mapping,” in Proceedings of International Conference on Computer Vision (2017), pp. 4715–4723.

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proceedings of International Conference on Computer Vision (2015).

Svaasand, L. O.

L. L. Randeberg, I. Baarstad, T. Løke, P. Kaspersen, and L. O. Svaasand, “Hyperspectral imaging of bruised skin,” Proc. SPIE 6078, 60780O (2006).
[Crossref]

Tsuchida, M.

M. Yamaguchi, H. Haneishi, H. Fukuda, J. Kishimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, “High-fidelity video and still-image communication based on spectral information: natural vision system and its applications,” Proc. SPIE 6062, 60620G (2006).
[Crossref]

van de Weijer, J.

A. Alvarez-Gila, J. van de Weijer, and E. Garrote, “Adversarial networks for spatial context-aware spectral image reconstruction from RGB,” in The IEEE International Conference on Computer Vision Workshops (2017).

Wang, L.

Z. Xiong, Z. Shi, H. Li, L. Wang, D. Liu, and F. Wu, “HSCNN: CNN-based hyperspectral image recovery from spectrally undersampled projections,” in Proceedings of International Conference on Computer Vision - Workshops (2017).

Wang, Z.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Processing 13(4), 600–612 (2004).
[Crossref]

Wu, F.

Z. Xiong, Z. Shi, H. Li, L. Wang, D. Liu, and F. Wu, “HSCNN: CNN-based hyperspectral image recovery from spectrally undersampled projections,” in Proceedings of International Conference on Computer Vision - Workshops (2017).

Z. Shi, C. Chen, Z. Xiong, D. Liu, and F. Wu, “HSCNN+: Advanced CNN-based hyperspectral recovery from RGB images,” in The IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018).

Wug Oh, S.

S. Wug Oh, M. S. Brown, M. Pollefeys, and S. Joo Kim, “Do it yourself hyperspectral imaging with everyday digital cameras,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2016).

Xiong, Z.

Z. Shi, C. Chen, Z. Xiong, D. Liu, and F. Wu, “HSCNN+: Advanced CNN-based hyperspectral recovery from RGB images,” in The IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018).

Z. Xiong, Z. Shi, H. Li, L. Wang, D. Liu, and F. Wu, “HSCNN: CNN-based hyperspectral image recovery from spectrally undersampled projections,” in Proceedings of International Conference on Computer Vision - Workshops (2017).

Xu, L.

X. Cao, F. Zhou, L. Xu, D. Meng, Z. Xu, and J. Paisley, “Hyperspectral image classification with Markov random fields and a convolutional neural network,” IEEE Trans. Image Processing 27(5), 2354–2367 (2018).
[Crossref]

Xu, Z.

X. Cao, F. Zhou, L. Xu, D. Meng, Z. Xu, and J. Paisley, “Hyperspectral image classification with Markov random fields and a convolutional neural network,” IEEE Trans. Image Processing 27(5), 2354–2367 (2018).
[Crossref]

Yamaguchi, M.

M. Yamaguchi, H. Haneishi, H. Fukuda, J. Kishimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, “High-fidelity video and still-image communication based on spectral information: natural vision system and its applications,” Proc. SPIE 6062, 60620G (2006).
[Crossref]

Yasuma, F.

F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, “Generalized assorted pixel camera: Postcapture control of resolution, dynamic range and spectrum,” IEEE Trans. Image Processing 19(9), 2241–2253 (2010).
[Crossref]

Yoo, H.

C. Chi, H. Yoo, and M. Ben-Ezra, “Multi-spectral imaging by optimized wide band illumination,” Int. J. Comput. Vis. 86(2-3), 140–151 (2010).
[Crossref]

Zhang, D.

Y. Fu, T. Zhang, Y. Zheng, D. Zhang, and H. Huang, “Joint camera spectral sensitivity selection and hyperspectral image recovery,” in Proceedings of European Conference on Computer Vision (2018).

Zhang, J.

N. Gat, G. Scriven, J. Garman, M. D. Li, and J. Zhang, “Development of four-dimensional imaging spectrometers (4d-is),” Proc. SPIE 6302, 63020M (2006).
[Crossref]

Zhang, L.

Y. Fu, Y. Zheng, L. Zhang, and H. Huang, “Spectral reflectance recovery from a single RGB image,” IEEE Trans. Comput. Imaging 4(3), 382–394 (2018).
[Crossref]

Zhang, T.

Y. Fu, T. Zhang, Y. Zheng, D. Zhang, and H. Huang, “Joint camera spectral sensitivity selection and hyperspectral image recovery,” in Proceedings of European Conference on Computer Vision (2018).

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proceedings of International Conference on Computer Vision (2015).

Zheng, Y.

Y. Fu, Y. Zheng, L. Zhang, and H. Huang, “Spectral reflectance recovery from a single RGB image,” IEEE Trans. Comput. Imaging 4(3), 382–394 (2018).
[Crossref]

Y. Jia, Y. Zheng, L. Gu, A. Subpa-Asa, A. Lam, Y. Sato, and I. Sato, “From RGB to spectrum for natural scenes via manifold-based mapping,” in Proceedings of International Conference on Computer Vision (2017), pp. 4715–4723.

Y. Fu, T. Zhang, Y. Zheng, D. Zhang, and H. Huang, “Joint camera spectral sensitivity selection and hyperspectral image recovery,” in Proceedings of European Conference on Computer Vision (2018).

S. Nie, L. Gu, Y. Zheng, A. Lam, N. Ono, and I. Sato, “Deeply learned filter response functions for hyperspectral reconstruction,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2018).

Zhou, F.

X. Cao, F. Zhou, L. Xu, D. Meng, Z. Xu, and J. Paisley, “Hyperspectral image classification with Markov random fields and a convolutional neural network,” IEEE Trans. Image Processing 27(5), 2354–2367 (2018).
[Crossref]

Zickler, T.

A. Chakrabarti and T. Zickler, “Statistics of real-world hyperspectral images,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2011), pp. 193–200.

IEEE Geosci. Remote Sens. Mag. (1)

J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. M. Nasrabadi, and J. Chanussot, “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geosci. Remote Sens. Mag. 1(2), 6–36 (2013).
[Crossref]

IEEE Trans. Comput. Imaging (1)

Y. Fu, Y. Zheng, L. Zhang, and H. Huang, “Spectral reflectance recovery from a single RGB image,” IEEE Trans. Comput. Imaging 4(3), 382–394 (2018).
[Crossref]

IEEE Trans. Geosci. Electron. (1)

M. A. Loghmari, M. S. Naceur, and M. R. Boussema, “A spectral and spatial source separation of multispectral images,” IEEE Trans. Geosci. Electron. 44(12), 3659–3673 (2006).
[Crossref]

IEEE Trans. Image Processing (3)

X. Cao, F. Zhou, L. Xu, D. Meng, Z. Xu, and J. Paisley, “Hyperspectral image classification with Markov random fields and a convolutional neural network,” IEEE Trans. Image Processing 27(5), 2354–2367 (2018).
[Crossref]

F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, “Generalized assorted pixel camera: Postcapture control of resolution, dynamic range and spectrum,” IEEE Trans. Image Processing 19(9), 2241–2253 (2010).
[Crossref]

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Processing 13(4), 600–612 (2004).
[Crossref]

Int. J. Comput. Vis. (2)

C. Chi, H. Yoo, and M. Ben-Ezra, “Multi-spectral imaging by optimized wide band illumination,” Int. J. Comput. Vis. 86(2-3), 140–151 (2010).
[Crossref]

S. Han, I. Sato, T. Okabe, and Y. Sato, “Fast spectral reflectance recovery using DLP projector,” Int. J. Comput. Vis. 110(2), 172–184 (2014).
[Crossref]

J. Food Eng. (1)

S. Nakariyakul and D. P. Casasent, “Fast feature selection algorithm for poultry skin tumor detection in hyperspectral data,” J. Food Eng. 94(3-4), 358–365 (2009).
[Crossref]

Proc. SPIE (5)

L. L. Randeberg, I. Baarstad, T. Løke, P. Kaspersen, and L. O. Svaasand, “Hyperspectral imaging of bruised skin,” Proc. SPIE 6078, 60780O (2006).
[Crossref]

G. N. Stamatas, C. J. Balas, and N. Kollias, “Hyperspectral image acquisition and analysis of skin,” Proc. SPIE 4959, 77–83 (2003).
[Crossref]

N. Gat, G. Scriven, J. Garman, M. D. Li, and J. Zhang, “Development of four-dimensional imaging spectrometers (4d-is),” Proc. SPIE 6302, 63020M (2006).
[Crossref]

R. W. Basedow, D. C. Carmer, and M. E. Anderson, “HYDICE system: implementation and performance,” Proc. SPIE 2480, 258–267 (1995).
[Crossref]

M. Yamaguchi, H. Haneishi, H. Fukuda, J. Kishimoto, H. Kanazawa, M. Tsuchida, R. Iwama, and N. Ohyama, “High-fidelity video and still-image communication based on spectral information: natural vision system and its applications,” Proc. SPIE 6062, 60620G (2006).
[Crossref]

Remote Sens. Environ. (1)

F. A. Kruse, A. B. Lefkoff, J. W. Boardman, K. B. Heidebrecht, A. T. Shapiro, P. J. Barloon, and A. F. H. Goetz, “The spectral image processing system (SIPS)–interactive visualization and analysis of imaging spectrometer data,” Remote Sens. Environ. 44(2-3), 145–163 (1993).
[Crossref]

Other (19)

A. Lam, A. Subpa-Asa, I. Sato, T. Okabe, and Y. Sato, “Spectral imaging using basis lights,” in Proceedings of Conference on British Machine Vision Conference (2013).

M. Kitahara, T. Okabe, C. Fuchs, and H. P. Lensch, “Simultaneous estimation of spectral reflectance and normal from a small number of images,” in VISAPP (1), (2015), pp. 303–313.

V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proceedings of International Conference on Machine Learning (2010), pp. 807–814.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proceedings of International Conference on Computer Vision (2015).

D. P. Kingma and J. L. Ba, “Adam: a method for stochastic optimization,” in Proceedings of International Conference on Learning Representations (2015).

A. Chakrabarti and T. Zickler, “Statistics of real-world hyperspectral images,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2011), pp. 193–200.

J.-I. Park, M.-H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of International Conference on Computer Vision (2007), pp. 1–8.

W. M. Porter and H. T. Enmark, “A system overview of the airborne visible/infrared imaging spectrometer (AVIRIS),” in Annual Technical Symposium (1987), pp. 22–31.

R. M. H. Nguyen, D. K. Prasad, and M. S. Brown, “Training-based spectral reconstruction from a single RGB image,” in Proceedings of European Conference on Computer Vision (2014), pp. 186–201.

A. Robles-Kelly, “Single image spectral reconstruction for multimedia applications,” in Proceedings of International Conference on Multimedia (2015), pp. 251–260.

A. Lam and I. Sato, “Spectral modeling and relighting of reflective-fluorescent scenes,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2013).

S. Wug Oh, M. S. Brown, M. Pollefeys, and S. Joo Kim, “Do it yourself hyperspectral imaging with everyday digital cameras,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2016).

B. Arad and O. Ben-Shahar, “Sparse recovery of hyperspectral signal from natural RGB images,” in Proceedings of European Conference on Computer Vision (2016), pp. 19–34.

Y. Jia, Y. Zheng, L. Gu, A. Subpa-Asa, A. Lam, Y. Sato, and I. Sato, “From RGB to spectrum for natural scenes via manifold-based mapping,” in Proceedings of International Conference on Computer Vision (2017), pp. 4715–4723.

A. Alvarez-Gila, J. van de Weijer, and E. Garrote, “Adversarial networks for spatial context-aware spectral image reconstruction from RGB,” in The IEEE International Conference on Computer Vision Workshops (2017).

Z. Xiong, Z. Shi, H. Li, L. Wang, D. Liu, and F. Wu, “HSCNN: CNN-based hyperspectral image recovery from spectrally undersampled projections,” in Proceedings of International Conference on Computer Vision - Workshops (2017).

Z. Shi, C. Chen, Z. Xiong, D. Liu, and F. Wu, “HSCNN+: Advanced CNN-based hyperspectral recovery from RGB images,” in The IEEE Conference on Computer Vision and Pattern Recognition Workshops (2018).

S. Nie, L. Gu, Y. Zheng, A. Lam, N. Ono, and I. Sato, “Deeply learned filter response functions for hyperspectral reconstruction,” in Proceedings of Conference on Computer Vision and Pattern Recognition (2018).

Y. Fu, T. Zhang, Y. Zheng, D. Zhang, and H. Huang, “Joint camera spectral sensitivity selection and hyperspectral image recovery,” in Proceedings of European Conference on Computer Vision (2018).


Figures (13)

Fig. 1.
Fig. 1. Overview of the proposed spectral reflectance imaging system. First, images under multiple optimal illuminations are captured by an RGB or monochrome camera. Then, the learned network recovers the spectral reflectance from these captured images.
Fig. 2.
Fig. 2. Overview of our CNN-based method which jointly learns the optimal illumination and the spectral reflectance recovery. The black arrow represents the training stage and the red arrow denotes the testing stage.
Fig. 3.
Fig. 3. Illustration of the optimization of the $m$-th illumination.
Fig. 4.
Fig. 4. Visual quality comparison for the spectral recovery from two RGB images. The error maps for [29]/Ours$_{m1}$/Ours$_{m2}$ results and the scenes are shown from left to right.
Fig. 5.
Fig. 5. Visual quality comparison for the spectral recovery from multiple monochromatic images. The error maps for [32]/Ours-$s3$/Ours-$s6$/Ours-$s8$/Ours-$d3$/Ours-$d6$/Ours-$d8$ results and the scenes are shown from left to right.
Fig. 6.
Fig. 6. The absolute error between the ground truth and the recovered results along the spectrum for all compared methods; the corresponding results are shown in Figs. 4 and 5.
Fig. 7.
Fig. 7. The setup for real data and visualized result. (a) Real capture system setup. (b) The optimal illumination spectra. (c) The recovered spectral reflectance results in RGB.
Fig. 8.
Fig. 8. The six monochromatic images captured under the six optimal illuminations.
Fig. 9.
Fig. 9. The recovered spectra of four typical patches on a Macbeth Color Checker. These four patches are marked in Fig. 7.
Fig. 10.
Fig. 10. Visual quality comparison for the spectral recovery from a single RGB image. The error maps for the RBF/SR/MM/DLCSR/HSCNN+/CNNS/Ours$_{s1}$/Ours$_{s2}$/Ours$_{s3}$ results and the absolute error along the spectrum are shown.
Fig. 11.
Fig. 11. (a) RMSE of the recovered spectral reflectance under different numbers of RGB images. (b) The CSR of the RGB camera used.
Fig. 12.
Fig. 12. (a) RMSE of the recovered spectral reflectance under different numbers of monochromatic images. (b) The CSR of the monochromatic camera used.
Fig. 13.
Fig. 13. The designed optimal illuminations and the corresponding singular values in the monochromatic-image setting. (a)–(d) show the spectral distributions of the optimal illuminations under 3, 4, 5, and 6 input images, respectively. The corresponding singular values are provided in (e)–(h).

Tables (4)

Tables Icon

Table 1. Quantitative results in the multiple-RGB-image setting. $\textrm{Ours}_{m1}$ and $\textrm{Ours}_{m2}$ denote multiplexing two optimal illuminations and designing two optimal illuminations, respectively.

Tables Icon

Table 2. Quantitative results in the multiple-monochromatic-image setting. Ours-$s3$, Ours-$s6$, and Ours-$s8$ denote our method under 3, 6, and 8 optimal multiplexed illuminations; Ours-$d3$, Ours-$d6$, and Ours-$d8$ denote our method under 3, 6, and 8 optimal designed illuminations.

Tables Icon

Table 3. Quantitative results on real data. The RMSE of eight typical patches and the average RMSE over all patches of the ColorChecker are provided. The eight patches are marked in Fig. 7(c).

Tables Icon

Table 4. Quantitative results in the single-RGB-image setting. Ours$_{s1}\sim$Ours$_{s3}$ denote our method without illumination optimization, with optimal illumination multiplexing, and with optimal illumination design, respectively.

Equations (12)


$$p(x,y,\lambda) = h(\lambda)\, l(\lambda)\, s(x,y,\lambda),$$
$$Y_{p,m}(x,y) = \int c_p(\lambda)\, l_m(\lambda)\, s(x,y,\lambda)\, d\lambda,$$
$$Y_{p,m}(x,y) = \sum_{b=1}^{B} c_p(\lambda_b)\, l_m(\lambda_b)\, s(x,y,\lambda_b),$$
$$Z = [Y_1; Y_2; \cdots; Y_M].$$
$$X_{j,t} = l_j S_t,$$
$$X_t = \mathrm{stack}(X_{1,t}; X_{2,t}; \cdots; X_{J,t}),$$
$$L_s(V_s) = \sum_{t=1}^{T} \left\| C(V_s X_t) - Z_t^s \right\|_2^2, \quad \mathrm{s.t.}\ V_s \geq 0,$$
$$L_d(V_d) = \sum_{t=1}^{T} \left\| C(V_d S_t) - Z_t^d \right\|_2^2 + \eta \left\| G V_d \right\|_2^2, \quad \mathrm{s.t.}\ V_d \geq 0,$$
$$F_k = \mathrm{ReLU}\left( W_k \ast \mathrm{stack}(F_{k-1}, Z) + b_k \right),$$
$$L(V_s, \Theta) = \tau_1 L_s(V_s) + \sum_{t=1}^{T} \left\| f(Z_t^s, \Theta) - S_t \right\|_2^2, \quad \mathrm{s.t.}\ V_s \geq 0,$$
$$L(V_d, \Theta) = \tau_2 L_d(V_d) + \sum_{t=1}^{T} \left\| f(Z_t^d, \Theta) - S_t \right\|_2^2, \quad \mathrm{s.t.}\ V_d \geq 0.$$
$$\begin{aligned} Y_{p,3}(x,y) &= \sum_{b=1}^{B} c_p(\lambda_b)\, l_3(\lambda_b)\, s(x,y,\lambda_b) \\ &= \sum_{b=1}^{B} c_p(\lambda_b)\, [\alpha_1 l_1(\lambda_b) + \alpha_2 l_2(\lambda_b)]\, s(x,y,\lambda_b) \\ &= \alpha_1 \sum_{b=1}^{B} c_p(\lambda_b)\, l_1(\lambda_b)\, s(x,y,\lambda_b) + \alpha_2 \sum_{b=1}^{B} c_p(\lambda_b)\, l_2(\lambda_b)\, s(x,y,\lambda_b) \\ &= \alpha_1 Y_{p,1} + \alpha_2 Y_{p,2}. \end{aligned}$$
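The identity in the last equation — a capture under a multiplexed illumination $l_3 = \alpha_1 l_1 + \alpha_2 l_2$ equals the same linear combination of the captures under $l_1$ and $l_2$ — follows directly from the linearity of the discretized imaging model, and can be checked numerically. A minimal sketch with synthetic spectra; all values here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
B = 31                                 # number of spectral bands (e.g., 400-700 nm in 10 nm steps)
c = rng.random(B)                      # camera spectral response c_p(lambda_b), illustrative
l1, l2 = rng.random(B), rng.random(B)  # two basis illumination spectra
s = rng.random(B)                      # spectral reflectance s(x, y, lambda_b) of one pixel
a1, a2 = 0.3, 0.7                      # multiplexing weights alpha_1, alpha_2

l3 = a1 * l1 + a2 * l2                 # multiplexed illumination

def capture(l):
    """Discretized imaging model: Y_{p,m} = sum_b c_p(lambda_b) l_m(lambda_b) s(lambda_b)."""
    return float(np.sum(c * l * s))

# Capturing under l3 gives the same linear combination of the individual captures.
assert np.isclose(capture(l3), a1 * capture(l1) + a2 * capture(l2))
```

The same linearity is what lets the illumination-optimization layer multiplex the spectra of a given light-source dataset into new effective illuminations.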
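The layer equation $F_k = \mathrm{ReLU}(W_k \ast \mathrm{stack}(F_{k-1}, Z) + b_k)$ stacks the previous feature maps with the captured images $Z$, convolves them, and applies a ReLU. A per-pixel ($1\times 1$ convolution) NumPy sketch; the layer width, the shapes, and the $1\times 1$ simplification are illustrative assumptions rather than the paper's exact architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def recovery_layer(F_prev, Z, W, b):
    """One layer F_k = ReLU(W_k * stack(F_{k-1}, Z) + b_k), with the
    convolution reduced to a 1x1 (per-pixel linear) map for brevity."""
    X = np.concatenate([F_prev, Z], axis=-1)  # stack features with captured images Z
    return relu(X @ W + b)                    # per-pixel weights, then ReLU

# Illustrative shapes: H x W pixels, C = 64 feature maps, M = 6 captured images.
H, Wd, C, M = 8, 8, 64, 6
rng = np.random.default_rng(1)
F = recovery_layer(rng.random((H, Wd, C)),     # F_{k-1}
                   rng.random((H, Wd, M)),     # Z
                   rng.random((C + M, C)) - 0.5,  # W_k
                   np.zeros(C))                # b_k
print(F.shape)  # (8, 8, 64); every entry is nonnegative after the ReLU
```

Stacking $Z$ into every layer keeps the measurements available throughout the network rather than only at the input, which matches the skip-style formulation of the layer equation.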