Abstract

This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for the plenoptic camera 2.0. The PSF is derived from the Fresnel diffraction equation and an image-formation analysis of a self-built imaging system, which is divided into two sub-systems to reflect the relay imaging property of the plenoptic camera 2.0. Variations of the PSF caused by changes in object depth and in sensor position are analyzed. A mathematical model of the FSPSF is then derived and verified to be depth-invariant. Experiments on real imaging systems demonstrate the consistency between the proposed PSF and the actual imaging results.
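As a quick illustration of the relay geometry behind this two-sub-system model, the short Python sketch below evaluates the two Gaussian lens relations that couple the sub-systems (Eq. (1) below): the main lens forms an intermediate image of the object, and each microlens re-images that intermediate image onto the sensor. All focal lengths and distances in the sketch are assumed example values, not the parameters of the self-built system reported in Table 1.

```python
# Illustrative relay-geometry check for a plenoptic 2.0 layout.
# All numeric values are assumed examples, not the paper's Table 1 parameters.

def conjugate(d_a: float, f: float) -> float:
    """Solve the thin-lens relation 1/d_a + 1/d_b = 1/f for d_b."""
    return 1.0 / (1.0 / f - 1.0 / d_a)

f1, f2 = 50.0, 6.5       # main-lens and microlens focal lengths [mm] (assumed)
d1 = 90.0                # object depth in front of the main lens [mm] (assumed)
d3_2 = 15.0              # microlens-to-sensor distance [mm] (assumed)

d2 = conjugate(d1, f1)       # main lens: intermediate image distance d2
d3_1 = conjugate(d3_2, f2)   # microlens: required object distance d3.1
print(f"d2 = {d2:.2f} mm, d3.1 = {d3_1:.2f} mm, "
      f"main lens to microlens spacing = {d2 + d3_1:.2f} mm")
```

In this focused configuration the microlens array sits a distance d3.1 behind the intermediate image plane, so that the relayed image is sharp on the sensor placed d3.2 behind the microlenses.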

© 2017 Optical Society of America

References

  1. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992).
  2. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light Field Photography with a Hand-Held Plenoptic Camera,” Technical Report, Stanford University (2005).
  3. R. Ng, “Digital light field photography,” Ph.D. thesis, Stanford University (2006).
  4. M. Levoy, R. Ng, A. Adam, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006).
  5. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015).
  6. V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings of 2014 IEEE International Conference on Computational Photography (ICCP) (2014), pp. 1–10.
  7. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (ICCP, 2009), pp. 1–8.
  8. T. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging 19(2), 1–28 (2010).
  9. A. Lumsdaine and T. Georgiev, “Full resolution light field rendering,” Technical Report, Adobe Systems (2008).
  10. T. Georgiev and A. Lumsdaine, “Superresolution with Plenoptic 2.0 cameras,” in Frontiers in Optics 2009/Laser Science XXV/Fall 2009, OSA Technical Digest (CD) (Optical Society of America, 2009), paper STuA6.
  11. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013).
  12. S. Shroff and K. Berkner, “Plenoptic System Response and Image Formation,” in Imaging and Applied Optics, OSA Technical Digest (online) (Optical Society of America, 2013), paper JW3B.1.
  13. S. Shroff and K. Berkner, “High Resolution Image Reconstruction for Plenoptic Imaging Systems using System Response,” in Imaging and Applied Optics Technical Papers, OSA Technical Digest (online) (Optical Society of America, 2012), paper CM2B.2.
  14. S. Shroff and K. Berkner, “Wave analysis of a plenoptic system and its applications,” Proc. SPIE 8667, 86671L (2013).
  15. T. E. Bishop and P. Favaro, “The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012).
  16. T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in 2009 IEEE International Conference on Computational Photography (ICCP) (2009), pp. 1–9.
  17. M. Turola, “Investigation of plenoptic imaging systems: a wave optics approach,” Ph.D. dissertation, City University London (2016).
  18. G. Häusler, “A method to increase the depth of focus by two step image processing,” Opt. Commun. 6(1), 38–42 (1972).
  19. Y. Bando, H. Holtzman, and R. Raskar, “Near-invariant blur for depth and 2D motion via time-varying light field analysis,” ACM Trans. Graph. 32(2), 539–555 (2013).
  20. S. Kuthirummal, H. Nagahara, C. Zhou, and S. K. Nayar, “Flexible Depth of Field Photography,” IEEE Trans. Pattern Anal. Mach. Intell. 33(1), 58–71 (2011).
  21. R. Yokoya and S. K. Nayar, “Extended Depth of Field Catadioptric Imaging Using Focal Sweep,” in 2015 IEEE International Conference on Computer Vision (ICCV) (IEEE, 2015), pp. 3505–3513.
  22. T. Georgiev, “Plenoptic 2.0 data: Photographer,” http://www.tgeorgiev.net/Jeff.jpg.
  23. X. Marichal, W. Ma, and H. Zhang, “Blur determination in the compressed domain using DCT information,” in Proceedings of 1999 International Conference on Image Processing (Cat. 99CH36348), Kobe (1999), pp. 386–390.

Figures (13)

Fig. 1. (a) Optical structure of plenoptic camera 2.0; and (b) raw image and magnifications of three micro images [22].
Fig. 2. Schematic diagram of the self-built imaging system.
Fig. 3. Prototype of the self-built imaging system: (a) top view; and (b) magnification of the sub-system in (a).
Fig. 4. PSFs calculated by Eq. (21) at different object depths d1: (a) d1 = 70 mm; (b) d1 = 80 mm; (c) d1 = 90 mm; (d) d1 = 100 mm; and (e) d1 = 110 mm.
Fig. 5. PSFs calculated by Eq. (21) at different sensor positions d3.2: (a) d3.2 = 10 mm; (b) d3.2 = 12 mm; (c) d3.2 = 15 mm; (d) d3.2 = 18 mm; and (e) d3.2 = 20 mm.
Fig. 6. PSFs with a single microlens: (a)-(d) PSFs obtained by the real imaging system with d1 = 75 mm, 90 mm, 100 mm, and 125 mm, respectively; (e)-(h) simulated PSFs with d1 = 75 mm, 90 mm, 100 mm, and 125 mm, respectively.
Fig. 7. PSFs with a microlens array: (a) prototype of the real imaging system with a microlens array; (b) experimental PSF captured at d1 = 90 mm; (c) enlarged view of the region outlined in red in (b); (d) simulated PSF at d1 = 90 mm.
Fig. 8. Two objects used in the experiments: (a) the first object; (b) the second object. Red lines mark the lengths measured in the x and y directions.
Fig. 9. The first and second rows are the real imaging results and the simulated results of the object in Fig. 8(a), respectively. The third and fourth rows are the real imaging results and the simulated results of the object in Fig. 8(b), respectively. Columns (a) to (f) correspond to d1 = 75 mm, 90 mm, 100 mm, 125 mm, 150 mm, and 175 mm, respectively.
Fig. 10. USAF resolution target used in the real imaging systems. Red lines mark the lengths measured in the x and y directions.
Fig. 11. The first and second rows are the real imaging results and the simulated results of the USAF resolution target in Fig. 10, respectively. Columns (a)-(c) correspond to d1 = 280 mm, 300 mm, and 330 mm.
Fig. 12. FSPSFs as the object depth d1 changes: (a) d1 = 70 mm; (b) d1 = 80 mm; (c) d1 = 90 mm; (d) d1 = 100 mm; and (e) d1 = 110 mm.
Fig. 13. x cross-sections of the FSPSFs at y = 0 mm for objects at different depths d1.

Tables (3)

Table 1. Geometric parameters of the self-built imaging system.

Table 2. Variation settings of object depth and sensor position.

Table 3. Geometric parameters of the imaging system with a microlens array.

Equations (25)


$$\frac{1}{d_1}+\frac{1}{d_2}=\frac{1}{f_1},\qquad \frac{1}{d_{3.1}}+\frac{1}{d_{3.2}}=\frac{1}{f_2},\tag{1}$$
$$U(x_{\mathrm{main}},y_{\mathrm{main}})=\iint_{-\infty}^{+\infty} U(x_0,y_0)\,h_{11}(x_{\mathrm{main}},y_{\mathrm{main}},x_0,y_0)\,dx_0\,dy_0.\tag{2}$$
$$\begin{aligned}U(x_{\mathrm{main}},y_{\mathrm{main}})=\;&\frac{\exp(ikd_1)}{i\lambda d_1}\exp\!\left[\frac{ik}{2d_1}(x_{\mathrm{main}}^2+y_{\mathrm{main}}^2)\right]\iint_{-\infty}^{+\infty} U(x_0,y_0)\\
&\times\exp\!\left[\frac{ik}{2d_1}(x_0^2+y_0^2)\right]\exp\!\left[-\frac{ik}{d_1}(x_0x_{\mathrm{main}}+y_0y_{\mathrm{main}})\right]dx_0\,dy_0,\end{aligned}\tag{3}$$
$$h_{11}(x_{\mathrm{main}},y_{\mathrm{main}},x_0,y_0)=\frac{\exp(ikd_1)}{i\lambda d_1}\exp\!\left\{\frac{ik}{2d_1}\left[(x_{\mathrm{main}}-x_0)^2+(y_{\mathrm{main}}-y_0)^2\right]\right\}.\tag{4}$$
$$U(x_1,y_1)=\iint_{-\infty}^{+\infty} U(x_{\mathrm{main}},y_{\mathrm{main}})\,h_{12}(x_1,y_1,x_{\mathrm{main}},y_{\mathrm{main}})\,dx_{\mathrm{main}}\,dy_{\mathrm{main}},\tag{5}$$
$$\begin{aligned}U(x_1,y_1)=\;&\frac{\exp(ikd_2)}{i\lambda d_2}\exp\!\left[\frac{ik}{2d_2}(x_1^2+y_1^2)\right]\iint_{-\infty}^{+\infty} U(x_{\mathrm{main}},y_{\mathrm{main}})\\
&\times t_{\mathrm{main}}(x_{\mathrm{main}},y_{\mathrm{main}})\exp\!\left[\frac{ik}{2d_2}(x_{\mathrm{main}}^2+y_{\mathrm{main}}^2)\right]\\
&\times\exp\!\left[-\frac{ik}{d_2}(x_{\mathrm{main}}x_1+y_{\mathrm{main}}y_1)\right]dx_{\mathrm{main}}\,dy_{\mathrm{main}},\end{aligned}\tag{6}$$
$$t_{\mathrm{main}}(x_{\mathrm{main}},y_{\mathrm{main}})=P_1(x_{\mathrm{main}},y_{\mathrm{main}})\exp\!\left[-\frac{ik}{2f_1}(x_{\mathrm{main}}^2+y_{\mathrm{main}}^2)\right],\tag{7}$$
$$\begin{aligned}h_{12}(x_1,y_1,x_{\mathrm{main}},y_{\mathrm{main}})=\;&\frac{\exp(ikd_2)}{i\lambda d_2}\,t_{\mathrm{main}}(x_{\mathrm{main}},y_{\mathrm{main}})\\
&\times\exp\!\left\{\frac{ik}{2d_2}\left[(x_{\mathrm{main}}-x_1)^2+(y_{\mathrm{main}}-y_1)^2\right]\right\}.\end{aligned}\tag{8}$$
$$U(x_1,y_1)=\iiiint_{-\infty}^{+\infty} U(x_0,y_0)\,h_{11}(x_{\mathrm{main}},y_{\mathrm{main}},x_0,y_0)\,h_{12}(x_1,y_1,x_{\mathrm{main}},y_{\mathrm{main}})\,dx_{\mathrm{main}}\,dy_{\mathrm{main}}\,dx_0\,dy_0.\tag{9}$$
$$\begin{aligned}h_1(x_1,y_1,x_0,y_0)&=\iint_{-\infty}^{+\infty} h_{11}(x_{\mathrm{main}},y_{\mathrm{main}},x_0,y_0)\,h_{12}(x_1,y_1,x_{\mathrm{main}},y_{\mathrm{main}})\,dx_{\mathrm{main}}\,dy_{\mathrm{main}}\\
&=\frac{\exp[ik(d_1+d_2)]}{-\lambda^2 d_1 d_2}\iint_{-\infty}^{+\infty} t_{\mathrm{main}}(x_{\mathrm{main}},y_{\mathrm{main}})\\
&\quad\times\exp\!\left\{\frac{ik}{2d_1}\left[(x_0-x_{\mathrm{main}})^2+(y_0-y_{\mathrm{main}})^2\right]\right\}\\
&\quad\times\exp\!\left\{\frac{ik}{2d_2}\left[(x_1-x_{\mathrm{main}})^2+(y_1-y_{\mathrm{main}})^2\right]\right\}dx_{\mathrm{main}}\,dy_{\mathrm{main}}.\end{aligned}\tag{10}$$
$$U(x,y)=\iint_{-\infty}^{+\infty} U(x_1,y_1)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1,\tag{11}$$
$$h_2(x,y,x_1,y_1)=\iint_{-\infty}^{+\infty} h_{21}(x_{\mathrm{micro}},y_{\mathrm{micro}},x_1,y_1)\,h_{22}(x,y,x_{\mathrm{micro}},y_{\mathrm{micro}})\,dx_{\mathrm{micro}}\,dy_{\mathrm{micro}},\tag{12}$$
$$\begin{aligned}h_{21}(x_{\mathrm{micro}},y_{\mathrm{micro}},x_1,y_1)&=\frac{\exp(ikd_{3.1})}{i\lambda d_{3.1}}\exp\!\left[\frac{ik}{2d_{3.1}}(x_{\mathrm{micro}}^2+y_{\mathrm{micro}}^2)\right]\\
&\quad\times\exp\!\left[\frac{ik}{2d_{3.1}}(x_1^2+y_1^2)\right]\exp\!\left[-\frac{ik}{d_{3.1}}(x_1x_{\mathrm{micro}}+y_1y_{\mathrm{micro}})\right]\\
&=\frac{\exp(ikd_{3.1})}{i\lambda d_{3.1}}\exp\!\left\{\frac{ik}{2d_{3.1}}\left[(x_{\mathrm{micro}}-x_1)^2+(y_{\mathrm{micro}}-y_1)^2\right]\right\},\end{aligned}\tag{13}$$
$$\begin{aligned}h_{22}(x,y,x_{\mathrm{micro}},y_{\mathrm{micro}})=\;&\frac{\exp(ikd_{3.2})}{i\lambda d_{3.2}}\,t_{\mathrm{micro}}(x_{\mathrm{micro}},y_{\mathrm{micro}})\\
&\times\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{\mathrm{micro}})^2+(y-y_{\mathrm{micro}})^2\right]\right\},\end{aligned}\tag{14}$$
$$t_{\mathrm{micro}}(x_{\mathrm{micro}},y_{\mathrm{micro}})=P_2(x_{\mathrm{micro}},y_{\mathrm{micro}})\exp\!\left[-\frac{ik}{2f_2}(x_{\mathrm{micro}}^2+y_{\mathrm{micro}}^2)\right],\tag{15}$$
$$\begin{aligned}h_2(x,y,x_1,y_1)&=\iint_{-\infty}^{+\infty} h_{21}(x_{\mathrm{micro}},y_{\mathrm{micro}},x_1,y_1)\,h_{22}(x,y,x_{\mathrm{micro}},y_{\mathrm{micro}})\,dx_{\mathrm{micro}}\,dy_{\mathrm{micro}}\\
&=\frac{\exp[ik(d_{3.1}+d_{3.2})]}{-\lambda^2 d_{3.1} d_{3.2}}\iint_{-\infty}^{+\infty} t_{\mathrm{micro}}(x_{\mathrm{micro}},y_{\mathrm{micro}})\\
&\quad\times\exp\!\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{\mathrm{micro}})^2+(y_1-y_{\mathrm{micro}})^2\right]\right\}\\
&\quad\times\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{\mathrm{micro}})^2+(y-y_{\mathrm{micro}})^2\right]\right\}dx_{\mathrm{micro}}\,dy_{\mathrm{micro}}.\end{aligned}\tag{16}$$
$$\begin{aligned}h_{22}(x,y,x_{\mathrm{micro}},y_{\mathrm{micro}})=\;&\frac{\exp(ikd_{3.2})}{i\lambda d_{3.2}}\sum_m\sum_n t_{\mathrm{micro}}(x_{\mathrm{micro}}-mD,\,y_{\mathrm{micro}}-nD)\\
&\times\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{\mathrm{micro}})^2+(y-y_{\mathrm{micro}})^2\right]\right\},\end{aligned}\tag{17}$$
$$\begin{aligned}h_2(x,y,x_1,y_1)&=\iint_{-\infty}^{+\infty} h_{21}(x_{\mathrm{micro}},y_{\mathrm{micro}},x_1,y_1)\,h_{22}(x,y,x_{\mathrm{micro}},y_{\mathrm{micro}})\,dx_{\mathrm{micro}}\,dy_{\mathrm{micro}}\\
&=\frac{\exp[ik(d_{3.1}+d_{3.2})]}{-\lambda^2 d_{3.1} d_{3.2}}\sum_m\sum_n\iint_{-\infty}^{+\infty} t_{\mathrm{micro}}(x_{\mathrm{micro}}-mD,\,y_{\mathrm{micro}}-nD)\\
&\quad\times\exp\!\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{\mathrm{micro}})^2+(y_1-y_{\mathrm{micro}})^2\right]\right\}\\
&\quad\times\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{\mathrm{micro}})^2+(y-y_{\mathrm{micro}})^2\right]\right\}dx_{\mathrm{micro}}\,dy_{\mathrm{micro}}.\end{aligned}\tag{18}$$
$$U(x,y)=\iiiint_{-\infty}^{+\infty} U(x_0,y_0)\,h_1(x_1,y_1,x_0,y_0)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1\,dx_0\,dy_0.\tag{19}$$
$$U(x,y)=\iint_{-\infty}^{+\infty} U(x_0,y_0)\,h(x,y,x_0,y_0)\,dx_0\,dy_0,\tag{20}$$
$$\begin{aligned}h(x,y,x_0,y_0)&=\iint_{-\infty}^{+\infty} h_1(x_1,y_1,x_0,y_0)\,h_2(x,y,x_1,y_1)\,dx_1\,dy_1\\
&=\frac{\exp[ik(d_1+d_2+d_{3.1}+d_{3.2})]}{\lambda^4 d_1 d_2 d_{3.1} d_{3.2}}\sum_m\sum_n\iint_{-\infty}^{+\infty}\Bigg[\iint t_{\mathrm{micro}}(x_{\mathrm{micro}}-mD,\,y_{\mathrm{micro}}-nD)\\
&\quad\times\exp\!\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{\mathrm{micro}})^2+(y_1-y_{\mathrm{micro}})^2\right]\right\}\\
&\quad\times\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{\mathrm{micro}})^2+(y-y_{\mathrm{micro}})^2\right]\right\}dx_{\mathrm{micro}}\,dy_{\mathrm{micro}}\\
&\quad\times\iint t_{\mathrm{main}}(x_{\mathrm{main}},y_{\mathrm{main}})\exp\!\left\{\frac{ik}{2d_1}\left[(x_0-x_{\mathrm{main}})^2+(y_0-y_{\mathrm{main}})^2\right]\right\}\\
&\quad\times\exp\!\left\{\frac{ik}{2d_2}\left[(x_1-x_{\mathrm{main}})^2+(y_1-y_{\mathrm{main}})^2\right]\right\}dx_{\mathrm{main}}\,dy_{\mathrm{main}}\Bigg]dx_1\,dy_1.\end{aligned}\tag{21}$$
$$\mathrm{FSPSF}=\int \mathrm{PSF}\,dt.\tag{22}$$
$$\mathrm{FSPSF}=\int \mathrm{PSF}(d_{3.2})\,dt.\tag{23}$$
$$\mathrm{FSPSF}=\int \mathrm{PSF}(d_{3.2})\,d\!\left(\frac{d_{3.2}-d_0}{v}\right)=\frac{1}{v}\int \mathrm{PSF}(d_{3.2})\,dd_{3.2}.\tag{24}$$
$$\begin{aligned}\mathrm{FSPSF}&=\frac{1}{v}\int h(x,y,x_0,y_0)\,dd_{3.2}\\
&=\frac{1}{v}\int\frac{\exp[ik(d_1+d_2+d_{3.1}+d_{3.2})]}{\lambda^4 d_1 d_2 d_{3.1} d_{3.2}}\sum_m\sum_n\iint_{-\infty}^{+\infty}\Bigg[\iint t_{\mathrm{micro}}(x_{\mathrm{micro}}-mD,\,y_{\mathrm{micro}}-nD)\\
&\quad\times\exp\!\left\{\frac{ik}{2d_{3.1}}\left[(x_1-x_{\mathrm{micro}})^2+(y_1-y_{\mathrm{micro}})^2\right]\right\}\\
&\quad\times\exp\!\left\{\frac{ik}{2d_{3.2}}\left[(x-x_{\mathrm{micro}})^2+(y-y_{\mathrm{micro}})^2\right]\right\}dx_{\mathrm{micro}}\,dy_{\mathrm{micro}}\\
&\quad\times\iint t_{\mathrm{main}}(x_{\mathrm{main}},y_{\mathrm{main}})\exp\!\left\{\frac{ik}{2d_1}\left[(x_0-x_{\mathrm{main}})^2+(y_0-y_{\mathrm{main}})^2\right]\right\}\\
&\quad\times\exp\!\left\{\frac{ik}{2d_2}\left[(x_1-x_{\mathrm{main}})^2+(y_1-y_{\mathrm{main}})^2\right]\right\}dx_{\mathrm{main}}\,dy_{\mathrm{main}}\Bigg]dx_1\,dy_1\,dd_{3.2}.\end{aligned}\tag{25}$$
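To make these expressions concrete, the following is a minimal Python/numpy sketch: the PSF of Eqs. (20)–(21) is approximated by cascading FFT-based Fresnel propagations through the main lens and a single on-axis microlens (the m = n = 0 term only), and the FSPSF of Eqs. (24)–(25) is approximated by a uniform average of PSFs over sampled sensor positions d3.2, which corresponds to a constant sweep speed v. The FFT-based propagator, grid settings, aperture sizes, and sweep range are all illustrative assumptions rather than the paper's implementation, and the sampling is deliberately coarse, so the output only mirrors the structure of the model.

```python
import numpy as np

lam = 532e-9                          # wavelength [m] (assumed)
k = 2 * np.pi / lam
N, dx = 1024, 2e-6                    # grid size and pitch (assumed, not tuned)
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)

def fresnel(u, z):
    """Fresnel propagation over distance z (transfer-function method)."""
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * lam * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def lens(f, radius):
    """Thin-lens transmittance t(x, y) = P(x, y) exp[-ik(x^2 + y^2)/(2f)]."""
    pupil = (X**2 + Y**2) <= radius**2
    return pupil * np.exp(-1j * k * (X**2 + Y**2) / (2 * f))

# Assumed example geometry [m]; not the values of Tables 1-3.
d1, d2, d3_1 = 90e-3, 112.5e-3, 11.5e-3
f1, f2 = 50e-3, 6.5e-3
R1, R2 = 0.4e-3, 0.25e-3              # apertures kept small for the coarse grid

def psf(d3_2, d1=d1):
    """|U(x, y)|^2 on the sensor for an on-axis point source, cf. Eqs. (20)-(21)."""
    u = np.zeros((N, N), complex)
    u[N // 2, N // 2] = 1.0               # point source in the object plane
    u = fresnel(u, d1) * lens(f1, R1)     # object -> main lens
    u = fresnel(u, d2)                    # main lens -> intermediate plane (x1, y1)
    u = fresnel(u, d3_1) * lens(f2, R2)   # intermediate plane -> microlens
    u = fresnel(u, d3_2)                  # microlens -> sensor
    out = np.abs(u) ** 2
    return out / out.sum()

# Focal sweep, cf. Eqs. (24)-(25): with the sensor moving at constant speed v,
# the time integral becomes a uniform average over the swept positions d3.2.
sweep = np.linspace(10e-3, 20e-3, 21)     # assumed sweep range of d3.2
fspsf = np.mean([psf(d) for d in sweep], axis=0)
```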
