Abstract

Multispectral light field acquisition is challenging due to the increased dimensionality of the problem. In this paper, inspired by anaglyph theory, i.e., the human visual system's ability to synthesize a colored stereo percept from color-complementary views (such as red and cyan), we propose to capture the multispectral light field using multiple cameras equipped with different wide-band filters. A convolutional neural network extracts the joint information of the different spectral channels and pairs the cross-channel images. Results on both synthetic data and real data captured by our prototype system validate the effectiveness and accuracy of the proposed method.
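The cross-channel pairing step can be illustrated with a minimal siamese-style matcher: a shared convolutional feature extractor is applied to patches from two spectrally different views, and the cosine distance between the resulting feature vectors serves as the matching cost. The sketch below is only schematic, in NumPy, with random untrained filters; the filter bank, patch size, and synthetic "red"/"cyan" data are illustrative assumptions, not the network or training procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(patch, kernels):
    """One mean-subtracted conv layer with ReLU. The shared weights play the
    role of the 'siamese' feature extractor applied to both spectral views."""
    patch = patch - patch.mean()          # remove per-channel brightness offset
    kh, kw = kernels.shape[1:]
    H, W = patch.shape
    out = np.zeros((len(kernels), H - kh + 1, W - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = max(0.0, np.sum(patch[i:i+kh, j:j+kw] * ker))
    return out.ravel()

def matching_cost(patch_a, patch_b, kernels):
    """Cosine-distance matching cost: low cost = likely the same scene point."""
    fa = conv_features(patch_a, kernels)
    fb = conv_features(patch_b, kernels)
    sim = fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-8)
    return 1.0 - sim

# Hypothetical data: one scene patch seen through two different wide-band
# filters (same structure, different gain), plus an unrelated patch.
scene     = rng.random((16, 16))
red       = 0.8 * scene + 0.05 * rng.random((16, 16))
cyan      = 0.5 * scene + 0.05 * rng.random((16, 16))
unrelated = rng.random((16, 16))

kernels = rng.standard_normal((8, 3, 3))  # random untrained filter bank
assert matching_cost(red, cyan, kernels) < matching_cost(red, unrelated, kernels)
```

Because the feature weights are shared across the two branches, structurally matching patches map to nearby feature vectors even when their absolute intensities differ between spectral channels, which is exactly the property a learned cross-channel matcher exploits.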

© 2017 Optical Society of America




J. Zbontar and Y. LeCun, “Computing the stereo matching cost with a convolutional neural network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 1592–1599.

H. Hirschmüller and D. Scharstein, “Evaluation of cost functions for stereo matching,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

D. Scharstein, H. Hirschmüller, Y. Kitajima, G. Krathwohl, N. Nesic, X. Wang, and P. Westling, “High-resolution stereo datasets with subpixel-accurate ground truth,” German Conference on Pattern Recognition8753, 31–42 (2014).

J. I. Park, M. H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2007), pp. 1–8.

PointGrey, “Grasshopper3 5.0 MP Color USB3 Vision,” https://www.ptgrey.com/grasshopper3-50-mp-color-usb3-vision-sony-pregius-imx250 .

J. Heikkila and O. Silvéln, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1112.

Y. S. Kang, C. Lee, and Y. S. Ho, “An efficient rectification algorithm for multi-view images in parallel camera array,” 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video (IEEE, 2008), pp. 61–64.

H. Rueda, H. Arguello, and G. R. Arce, “Dual-ARM VIS/NIR compressive spectral imager,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2006), pp. 2572–2576.

L. Mcmillan and G. Bishop, “Plenoptic modeling: an image-based rendering system,” in Proceedings of Conference on Computer Graphics and Interactive Techniques. ACM29(5), 39–46 (1995).

M. Landy and J. Movshon, “The plenoptic function and the elements of early vision,” MIT Press1, 3–20 (1997).

S. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen, “The Lumigraph,” in Proceedings of Conference on Computer Graphics and Interactive Techniques. ACM96, 43–54 (2001).

D. Wood, D. Azuma, K. Aldinger, B. Curless, T. Duchamp, D. H. Salesin, and W. Stuetzle, “Surface light fields for 3D photography,” in Proceedings of the Conference on Computer Graphics and Interactive Techniques. ACM287–296 (2000).

Y. Lecun, “Learning Invariant Feature Hierarchies,” Euro. Conf. Comput. Vision 2012, pp. 496–505.

M. Menze and A. Geiger, “Object scene flow for autonomous vehicles,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2015), pp. 3061–3070.

Autodesk 3ds Max, “3D computer graphics program”, http://www.autodesk.com/products/3ds-max/overview .

Nguyen, M. H. Rang, D. K. Prasad, and M. S. Brown, “Training-Based Spectral Reconstruction from a Single RGB Image,” Euro. Conf. Comput. Vision 2014, 186–201.

Cited By

OSA participates in Crossref's Cited-By Linking service. Citing articles from OSA journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (12)

Fig. 1 Overview of our system, which exploits the different spectral sensitivities of the filter array to reconstruct the multispectral light field through CNN-based heterogeneous stereo matching and spectral demultiplexing.
Fig. 2 Camera array configuration.
Fig. 3 Examples of predicted disparity maps on the KITTI 2015 dataset [26, 27] using our proposed method with different channel inputs (even rows), alongside the results of Zbontar et al. [24] with full-channel inputs (odd rows). Note that objects closer to the camera have larger disparities than objects farther away, with warmer colors representing larger disparities and smaller depths. When taking a single channel as input, we try different pairs of RGB channels, and all of them yield very similar results.
Fig. 4 Camera array with heterogeneous wide-band color filters. (a) shows the eight qualified plastic filters used in our prototype camera-array system; (b) illustrates their respective spectral sensitivities, which provide enough independent measurements of the incoming light spectrum. (c) shows the Point Grey GS3-U3-51S5C-C camera [31] used in our prototype system, and (d) its standard sensor spectral response.
Fig. 5 (a) Simulated color images captured by a 2 × 4 camera array with filters in front of the cameras. (b) Simulated image registration with the upper-left image as the reference. We also measure the peak signal-to-noise ratio (PSNR) for images at different viewpoints; an increasing distance between the target camera and the reference camera decreases the accuracy of image registration, and hence the multispectral reconstruction quality.
Fig. 6 Comparison of three commonly used optimization methods. All three converge to the same solution and hence share the same accuracy. With iteration counts of the same order of magnitude, the conjugate gradient method runs fastest and gradient descent slowest.
Fig. 7 Reconstructed multispectral channels of the first (top-left) view of our eight-camera array system. We select six single-spectral reflectance images from all 24 reconstructed channels and compare the results with the simulated ground-truth reflectance.
Fig. 8 (a) PSNR of the simulated CAT image under additive Gaussian noise with different parameters σ. (b) illustrates both the reconstructed and ground-truth spectral reflectance curves of two selected points in (a), and pseudo-color images at the chosen wavelengths marked by dotted lines (586 nm for point A, 618 nm for point B).
Fig. 9 Testing results of our proposed method on the light field dataset Toy Humvee and Soldier captured by the Computer Graphics Laboratory at Stanford University. We choose a 2 × 4 subset of the 256 pre-calibrated views on a 16 × 16 grid. (a) shows the simulated light field color images with filters; (b) simulates image registration for the first (top-left) view of the eight images; (c) illustrates four single-spectral reflectance images from all 24 reconstructed channels, warped to all views, for the 100 × 100 red patches in (b).
Fig. 10 Verification experiment using a Macbeth color chart, comparing the results from our method with the ground-truth curves of the color checker. We randomly choose six of the 24 color patches and illustrate both their reconstructed and standard spectral reflectance curves over 24 channels from 450 nm to 634 nm, at an interval of 8 nm.
Fig. 11 Real color images captured by our proposed 2 × 4 camera array system with heterogeneous wide-band filters. Both scenes are captured under indoor iodine-tungsten illumination, and we obtain the illumination spectra by capturing a standard white board. We also randomly select several points with different colors and illustrate their reconstructed 24-channel single-spectral reflectance curves in the rightmost column.
Fig. 12 Multispectral image reconstruction of various light field datasets captured by our own system under indoor iodine-tungsten illumination, with detailed results of the same patch in all eight views. For each scene we select two single-spectral reflectance images from all 24 reconstructed multispectral channels, rendered as RGB images using the spectral sensitivities of the Point Grey GS3-U3-51S5C-C camera sensor.
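The optimizer comparison described in Fig. 6 can be reproduced in miniature on a generic least-squares objective. A sketch with illustrative sizes, step size, and iteration count (none taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Generic least-squares objective ||P - C s||^2 (sizes are illustrative only).
rng = np.random.default_rng(1)
C = rng.standard_normal((30, 10))
P = C @ np.abs(rng.standard_normal(10))

def f(s):
    r = C @ s - P
    return r @ r

def grad(s):
    return 2.0 * C.T @ (C @ s - P)

s0 = np.zeros(10)

# Conjugate gradient via SciPy.
res_cg = minimize(f, s0, jac=grad, method='CG')

# Plain gradient descent with a small fixed step, for comparison.
s_gd = s0.copy()
for _ in range(5000):
    s_gd -= 1e-3 * grad(s_gd)
```

Both solvers reach (numerically) the same minimizer, consistent with the caption's observation that accuracy is shared; conjugate gradient typically needs far fewer iterations to get there.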

Tables (1)

Table 1 Evaluating multispectral reflectance reconstruction errors from three groups of 2 × 4 simulated light field datasets.
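The PSNR used to evaluate reconstruction quality in Figs. 5 and 8 can be computed directly from the mean squared error. A minimal sketch (the `psnr` helper and the 0–1 intensity range are assumptions, not from the paper):

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    diff = np.asarray(ref, dtype=np.float64) - np.asarray(img, dtype=np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.1 over the whole image gives MSE = 0.01, i.e. 20 dB.
ref = np.zeros((64, 64))
img = ref + 0.1
value = psnr(ref, img)
```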

Equations (7)

Equations on this page are rendered with MathJax.

$$p_{m,k}(x) = \int_{\Omega} s(\lambda, x)\, c^{camera}_{k}(\lambda)\, c^{filter}_{m}(\lambda)\, d\lambda,$$

$$\mathbf{C} = \begin{bmatrix} c_{1,1} & c_{1,2} & \cdots & c_{1,N} \\ c_{2,1} & c_{2,2} & \cdots & c_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ c_{3 \times M,1} & c_{3 \times M,2} & \cdots & c_{3 \times M,N} \end{bmatrix},$$

$$p_{m,k} = \sum_{i=1}^{N} C_{3 \times (m-1)+k,\, i}\, s_i.$$

$$\mathbf{P} = \mathbf{C}\mathbf{s},$$

$$\hat{\mathbf{s}} = \arg\min_{\mathbf{s}} \left\| \mathbf{P} - \mathbf{C}\mathbf{s} \right\|^{2},$$

$$\hat{\mathbf{s}} = \arg\min_{\mathbf{s}} \left\| \mathbf{P} - \mathbf{C}\mathbf{s} \right\|^{2} + \lambda \left\| \mathbf{s} \right\|^{2} \quad \mathrm{s.t.}\;\; s(i) \geq 0 \;\; \mathrm{for\ all}\; i,$$

$$E_C = \left| \frac{\frac{s_1 + \Delta s_1}{s_0 + \Delta s_0} - \frac{s_1}{s_0}}{\frac{s_1}{s_0}} \right| \approx \left| \frac{\Delta s_1 s_0 - \Delta s_0 s_1}{s_0^2 \cdot \frac{s_1}{s_0}} \right| \leq \left| \frac{\Delta s_1}{s_1} \right| + \left| \frac{\Delta s_0}{s_0} \right|$$
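The regularized, nonnegativity-constrained least-squares reconstruction above can be solved by folding the Tikhonov term into an augmented linear system and applying a bounded least-squares solver. A minimal sketch with synthetic data (the matrix sizes, response matrix, and λ value are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Synthetic stand-in for the measurement model P = C s:
# 3*M = 24 color-filter measurements, N = 24 spectral bins (illustrative sizes).
rng = np.random.default_rng(0)
M3, N = 24, 24
C = np.abs(rng.standard_normal((M3, N)))   # stacked camera/filter responses (hypothetical)
s_true = np.abs(rng.standard_normal(N))    # nonnegative ground-truth reflectance
P = C @ s_true + 0.01 * rng.standard_normal(M3)

# Fold lambda * ||s||^2 into an augmented least-squares system by stacking
# sqrt(lambda) * I below C, then enforce s(i) >= 0 via bounds.
lam = 1e-3
A = np.vstack([C, np.sqrt(lam) * np.eye(N)])
b = np.concatenate([P, np.zeros(N)])
res = lsq_linear(A, b, bounds=(0, np.inf))
s_hat = res.x
```

The augmentation works because minimizing ||b − Ax||² over the stacked system equals minimizing ||P − Cs||² + λ||s||², so a single bounded solver handles both the penalty and the constraint.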
