A Novel Saliency Prediction Method Based on Fast Radial Symmetry Transform and Its Generalization

Authors: Jiayu Liang, Shiu Yin Yuen

Published in: Cognitive Computation, Issue 4/2016 (01-08-2016)


Abstract

Symmetry has been observed to be an important indicator of visual attention. In this paper, we propose a novel saliency prediction method based on the fast radial symmetry transform (FRST) and its generalization (GFRST). We make two contributions. First, we propose a novel saliency predictor based on FRST. Unlike most previous work, the new approach does not require a full set of visual features (intensity, color, orientation); it uses only symmetry and center bias to model human fixations at the behavioral level. The new model is shown to have higher prediction accuracy and lower computational complexity than an existing symmetry-based saliency prediction method. Second, we propose using GFRST for predicting visual attention. GFRST is shown to outperform FRST because it can detect symmetries distorted by parallel projection.
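To make the idea concrete, the sketch below combines an FRST-style radial symmetry map with a multiplicative Gaussian center-bias prior. This is not the authors' implementation: it uses only the bright ("positively affected") votes of the transform, and the function name (`frst_saliency`), the radii, the vote cap, and the center-bias width are illustrative assumptions; NumPy and OpenCV are assumed for gradients and smoothing.

```python
import numpy as np
import cv2

def frst_saliency(gray, radii=(8, 16, 24), alpha=2.0, center_bias_sigma=0.3):
    """Simplified FRST-style symmetry map combined with a Gaussian center-bias
    prior. A sketch only; the paper's exact formulation may differ."""
    gray = gray.astype(np.float64)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    eps = 1e-8
    ux, uy = gx / (mag + eps), gy / (mag + eps)      # unit gradient directions

    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    S = np.zeros((h, w))

    for n in radii:
        O = np.zeros((h, w))                         # orientation projection image
        M = np.zeros((h, w))                         # magnitude projection image
        # vote at the "positively affected" pixel n steps along the gradient
        px = np.clip((xs + n * ux).round().astype(int), 0, w - 1)
        py = np.clip((ys + n * uy).round().astype(int), 0, h - 1)
        np.add.at(O, (py, px), 1.0)
        np.add.at(M, (py, px), mag)
        O = np.minimum(O, n) / n                     # cap and normalize votes (cap = n is an assumption)
        F = (O ** alpha) * (M / (M.max() + eps))     # per-radius symmetry contribution
        S += cv2.GaussianBlur(F, (0, 0), 0.25 * n)   # spread votes over a radius-dependent neighborhood
    S /= len(radii)

    # multiplicative Gaussian center-bias prior (width as a fraction of image size)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sig2 = (center_bias_sigma * max(h, w)) ** 2
    bias = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sig2))

    sal = S * bias
    return (sal - sal.min()) / (sal.max() - sal.min() + eps)
```

A call such as `frst_saliency(cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE))` would return a map in [0, 1] that can be compared against recorded fixations; the radii and bias width here are illustrative choices, not the settings reported in the paper.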


Metadata

Title: A Novel Saliency Prediction Method Based on Fast Radial Symmetry Transform and Its Generalization
Authors: Jiayu Liang, Shiu Yin Yuen
Publication date: 01-08-2016
Publisher: Springer US
Published in: Cognitive Computation, Issue 4/2016
Print ISSN: 1866-9956
Electronic ISSN: 1866-9964
DOI: https://doi.org/10.1007/s12559-016-9406-8
