Abstract

The circular fisheye lens has an angular field of view (FOV) of approximately 180°, far larger than that of an ordinary lens. As a result, the content of images captured with a circular fisheye lens is distributed non-uniformly and suffers spherical deformation. With the rapid development of deep neural networks for normal images, applying them to intelligent image processing for the circular fisheye lens is a new task of significant importance. In this paper, we take aurora images captured with all-sky imagers (ASI) as a typical example. By analyzing the imaging principle of the ASI and the magnetic characteristics of the aurora, a deformed region division (DRD) scheme is proposed to replace the region proposal network (RPN) in the advanced mask regional convolutional neural network (Mask R-CNN) framework. Each image can thus be regarded as a “bag” of deformed regions represented by CNN features. After clustering all CNN features to generate a vocabulary, each deformed region is quantized to its nearest center for indexing. At the online search stage, a similarity score is computed by measuring the distances between the regions in the query image and all regions in the data set, and the image with the highest score is returned as the top-ranked result. Experimental results show that the proposed method greatly improves search accuracy and efficiency, demonstrating that it is a valuable step toward intelligent image processing for circular fisheye lenses.
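
The offline indexing and online scoring pipeline summarized above can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' implementation: it assumes the CNN descriptors of the deformed regions have already been extracted, and the function names, vocabulary size, and sigma parameter are all hypothetical.

import numpy as np
from sklearn.cluster import KMeans

# Offline stage: cluster the CNN descriptors of all deformed regions in the
# data set into a visual vocabulary, then quantize every region to its
# nearest center so it can be indexed under that visual word.
def build_vocabulary(region_features, n_words=256):
    # region_features: (N, D) array, one CNN descriptor per deformed region
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0)
    kmeans.fit(region_features)
    return kmeans

def quantize(kmeans, region_features):
    # Index of the nearest vocabulary center for each region descriptor
    return kmeans.predict(region_features)

# Online stage: score a database image against the query by comparing only
# regions quantized to the same visual word, weighting each match by a
# Gaussian of the descriptor distance (in the spirit of Eqs. (7)-(8) below).
def similarity(query_feats, query_words, db_feats, db_words, sigma=0.5):
    score = 0.0
    for fq, wq in zip(query_feats, query_words):
        for fj, wj in zip(db_feats, db_words):
            if wq == wj:
                e = np.linalg.norm(fq - fj)
                score += np.exp(-e ** 2 / sigma ** 2)
    return score / max(len(db_feats), 1)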

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Figures (9)

Fig. 1 One example of the transformed results. (a) Deformed image with circular vision. (b) Normal image with rectangular vision. The latitude-longitude projection and cylindrical compression are applied in the transformation from the deformed image to the normal image.
Fig. 2 Related images of YRS. (a) Location of YRS. (b) Outdoor scene at YRS. (c) Observation hut of the ASI systems at YRS.
Fig. 3 The structure of the ASI, which consists of three parts.
Fig. 4 Examples of aurora images captured with triple-wavelength ASIs.
Fig. 5 Geometrical relationship for ASI aurora observation. (a) ASI aurora image in the polar coordinate system. (b) Geometrical relationship between YRS and the sky above it.
Fig. 6 Analysis of an ASI aurora image.
Fig. 7 Diagram of the proposed method.
Fig. 8 Deformed region detection. (a) IoU score calculation. (b) Change of the average IoU score as the number of clusters increases. (c) Our six region priors.
Fig. 9 Sample results of aurora vortex search using different methods.

Tables (2)

Table 1 Implementation details of all comparison methods.

Table 2 Comparison of mAPs (%) and average query times (s) using different methods.

Equations (8)

$$\theta = 90^\circ \times r / R,$$
$$\beta = \pi - \alpha' = \pi - (\alpha + \alpha'') = \pi - (\pi - \theta + \alpha) = \theta - \alpha,$$
$$\frac{R_E}{\sin \alpha} = \frac{R_E + h}{\sin(\pi - \theta)} = \frac{R_E + h}{\sin(\theta)},$$
$$\beta = \theta - \alpha = \theta - \sin^{-1}(\sin \alpha) = \theta - \sin^{-1}\!\left[ \frac{R_E}{R_E + h} \sin(\theta) \right].$$
$$d = (R_E + h)\,\beta.$$
$$\mathrm{IoU} = \frac{\cap(A_g, A_p)}{\cup(A_g, A_p)},$$
$$m(f_q, f_j) = \begin{cases} \exp\!\left( -\dfrac{e^2}{\sigma^2} \right), & \text{if } q(f_q) = q(f_j),\ e < T \\ 0, & \text{otherwise}, \end{cases}$$
$$S(Q, I) = \frac{\sum_{f_q \in Q,\, f_j \in I} m(f_q, f_j) \cdot \mathrm{idf}^2}{\lVert I \rVert_2}.$$
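
Equations (1)-(5) map a pixel at radius r in the circular ASI image to a ground-range distance via the zenith angle and the law of sines. The short sketch below works through them numerically; the Earth radius and the auroral emission height h are illustrative assumed values, not figures taken from the paper.

import numpy as np

R_E = 6371.0  # Earth radius in km (assumed)
h = 150.0     # auroral emission height in km (assumed)

def ground_distance(r, R):
    # Eq. (1): pixel radius r (image disk radius R) -> zenith angle theta
    theta = np.deg2rad(90.0 * r / R)
    # From Eq. (3), law of sines: sin(alpha) = R_E / (R_E + h) * sin(theta)
    alpha = np.arcsin(R_E / (R_E + h) * np.sin(theta))
    # Eq. (4): angle beta subtended at the Earth's center
    beta = theta - alpha
    # Eq. (5): arc length at the aurora's altitude
    return (R_E + h) * beta

# For example, a pixel halfway out from the image center:
# ground_distance(0.5, 1.0) -> roughly 148 km for the values above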
