
2018 | OriginalPaper | Chapter

An EEG-Based Image Annotation System

Authors : Viral Parekh, Ramanathan Subramanian, Dipanjan Roy, C. V. Jawahar

Published in: Computer Vision, Pattern Recognition, Image Processing, and Graphics

Publisher: Springer Singapore


Abstract

The success of deep learning in computer vision has greatly increased the need for annotated image datasets. We propose an EEG (Electroencephalogram)-based image annotation system. While humans can recognize objects in 20–200 ms, the need to manually label images results in low annotation throughput. Our system employs brain signals captured via a consumer EEG device to achieve an annotation rate of up to 10 images per second. We exploit the P300 event-related potential (ERP) signature to identify target images during a rapid serial visual presentation (RSVP) task. We further perform unsupervised outlier removal to achieve an F1-score of 0.88 on the test set. The proposed system does not depend on category-specific EEG signatures, enabling the annotation of any new image category without model pre-training.
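The pipeline the abstract describes — an RSVP image stream, a per-image EEG epoch scored for a P300 response, and unsupervised outlier removal over the predicted targets — can be sketched as below. This is an illustrative toy, not the authors' implementation: the simulated signal parameters, the amplitude-threshold detector, and the MAD-based outlier rule are all assumptions standing in for the paper's learned classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RSVP stream: 100 images, each paired with a 32-channel EEG
# epoch of 128 samples time-locked to image onset.
n_images, n_channels, n_samples = 100, 32, 128
targets = rng.random(n_images) < 0.1          # ~10% of images are targets

# Simulate epochs: target epochs carry a positive deflection in a late
# window (a stand-in for the P300); non-targets are pure noise.
epochs = rng.normal(0.0, 1.0, (n_images, n_channels, n_samples))
p300_window = slice(70, 100)                  # ~270-390 ms at 256 Hz (illustrative)
epochs[targets, :, p300_window] += 2.0

def p300_score(epoch, window=p300_window):
    """Mean amplitude in the P300 window, averaged over channels."""
    return epoch[:, window].mean()

scores = np.array([p300_score(e) for e in epochs])
# Crude threshold detector in place of the paper's learned classifier.
predicted = scores > scores.mean() + scores.std()

# Unsupervised outlier removal: among predicted targets, discard epochs
# whose score deviates strongly from the group median (MAD criterion).
idx = np.flatnonzero(predicted)
med = np.median(scores[idx])
mad = np.median(np.abs(scores[idx] - med)) + 1e-9
kept = idx[np.abs(scores[idx] - med) / mad < 3.5]

tp = np.sum(targets[kept])
precision = tp / max(len(kept), 1)
recall = tp / max(targets.sum(), 1)
f1 = 2 * precision * recall / max(precision + recall, 1e-9)
```

The point of the sketch is the structure, not the numbers: because the detector keys only on a generic P300 response rather than on category-specific signatures, nothing in the loop changes when a new image category is annotated.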


Metadata
Title
An EEG-Based Image Annotation System
Authors
Viral Parekh
Ramanathan Subramanian
Dipanjan Roy
C. V. Jawahar
Copyright Year
2018
Publisher
Springer Singapore
DOI
https://doi.org/10.1007/978-981-13-0020-2_27
