
22-03-2019

Detecting and Mitigating Adversarial Perturbations for Robust Face Recognition

Authors: Gaurav Goswami, Akshay Agarwal, Nalini Ratha, Richa Singh, Mayank Vatsa

Published in: International Journal of Computer Vision | Issue 6-7/2019


Abstract

Deep neural network (DNN) architecture based models have high expressive power and learning capacity. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions that are learned within their many layers of representation. Realizing this, many researchers have started to design methods to exploit the drawbacks of deep learning based algorithms, questioning their robustness and exposing their singularities. In this paper, we attempt to unravel three aspects related to the robustness of DNNs for face recognition: (i) assessing the impact of deep architectures for face recognition in terms of vulnerability to attacks, (ii) detecting the singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks, and (iii) making corrections to the processing pipeline to alleviate the problem. Our experimental evaluation using multiple open-source DNN-based face recognition networks and three publicly available face databases demonstrates that the performance of deep learning based face recognition algorithms can suffer greatly in the presence of adversarial distortions. We also evaluate the proposed approaches on four existing quasi-imperceptible distortions: DeepFool, universal adversarial perturbations, \(l_2\), and Elastic-Net (EAD). The proposed method is able to detect both types of attacks with very high accuracy by suitably designing a classifier using the responses of the hidden layers in the network. Finally, we present effective countermeasures to mitigate the impact of adversarial attacks and improve the overall robustness of DNN-based face recognition.
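
To make the detection idea concrete, here is a minimal sketch of the general approach the abstract describes: summarize hidden-layer filter responses and train a classifier on those summaries. It is an illustration only, not the authors' pipeline; the ResNet stand-in, the monitored layers, the per-filter statistic, and the placeholder perturbation are all assumptions.

```python
# A minimal sketch (assumptions: the ResNet stand-in, the monitored layers,
# the per-filter statistic, and the placeholder perturbation) of detecting
# adversarial inputs from hidden-layer responses.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

net = models.resnet18()  # untrained stand-in for a face-recognition CNN
net.eval()

activations = {}

def hook(name):
    def fn(module, inputs, output):
        activations[name] = output.detach()
    return fn

# Tap a few hidden layers; which layers to monitor is a design choice.
layer_names = ["layer1", "layer2", "layer3", "layer4"]
for name in layer_names:
    getattr(net, name).register_forward_hook(hook(name))

def layer_descriptor(x):
    """Mean absolute filter response per channel, concatenated across layers."""
    with torch.no_grad():
        net(x)
    feats = [activations[n].abs().mean(dim=(2, 3)) for n in layer_names]
    return torch.cat(feats, dim=1).numpy()  # shape: (N, total_filters)

# Hypothetical data: real faces and crude adversarial counterparts.
real = torch.randn(32, 3, 224, 224)
adv = real + 0.03 * torch.sign(torch.randn_like(real))  # placeholder attack

X = np.vstack([layer_descriptor(real), layer_descriptor(adv)])
y = np.hstack([np.zeros(len(real)), np.ones(len(adv))])

# The detector: flags inputs whose filter-response pattern looks abnormal.
detector = SVC(kernel="rbf").fit(X, y)
print(detector.predict(layer_descriptor(adv[:4])))
```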

Footnotes
1
A shorter version of this manuscript was presented at AAAI 2018.
 
2
The algorithms proposed by Metzen et al. (2017) and Lu et al. (2017) also use network responses to detect adversarial attacks. As mentioned in Sect. 2, SafetyNet (Lu et al. 2017) hypothesizes that the ReLU activations at the final stage of a CNN follow different distributions for real and adversarial examples. Based on this assumption, it discretizes the ReLU maps and appends an RBF-SVM to the target model for adversarial example detection. Metzen et al. (2017), on the other hand, train a detector network on the features of the internal layers of the CNN.
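
As an illustration of the SafetyNet-style detector just described, one might binarize late-stage ReLU maps into discrete codes and fit an RBF-SVM on them; this is a rough sketch assuming nothing beyond what the footnote states, and the map shapes, threshold, and random data are hypothetical.

```python
# A rough sketch of the SafetyNet-style detector described above; the map
# shapes, threshold, and random data are all hypothetical.
import numpy as np
from sklearn.svm import SVC

def relu_code(maps, threshold=0.0):
    """Discretize ReLU feature maps (N, C, H, W) into binary on/off codes."""
    return (maps > threshold).astype(np.float32).reshape(len(maps), -1)

# Stand-in final-stage ReLU maps for real and adversarial examples.
real_maps = np.maximum(np.random.randn(64, 128, 7, 7), 0)
adv_maps = np.maximum(np.random.randn(64, 128, 7, 7) + 0.5, 0)

X = np.vstack([relu_code(real_maps), relu_code(adv_maps)])
y = np.hstack([np.zeros(64), np.ones(64)])

svm = SVC(kernel="rbf").fit(X, y)  # the appended RBF-SVM detector
```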
 
3
Detection accuracies are reported at the equal error rate (EER).
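
For reference, a short sketch of how accuracy at the EER can be computed from detector scores (the labels and scores below are made up): the EER is the ROC operating point where the false-positive and false-negative rates coincide.

```python
# Sketch of computing the equal error rate from detector scores (scores are
# made up): the EER is where the false-positive rate equals the false-negative
# rate along the ROC curve; accuracy at EER is then 1 - EER.
import numpy as np
from sklearn.metrics import roc_curve

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # 0 = real, 1 = adversarial
scores = np.array([0.1, 0.4, 0.35, 0.2, 0.8, 0.7, 0.9, 0.3])

fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1 - tpr
eer = fpr[np.argmin(np.abs(fnr - fpr))]
print(f"EER: {eer:.3f}  accuracy at EER: {1 - eer:.3f}")
```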
 
Literature
Addad, B., Kodjabashian, J., & Meyer, C. (2018). Clipping free attacks against artificial neural networks. arXiv preprint arXiv:1803.09468.
Agarwal, A., Singh, R., & Vatsa, M. (2016). Face anti-spoofing using Haralick features. In IEEE international conference on biometrics: Theory, applications and systems (pp. 1–6).
Agarwal, A., Singh, R., Vatsa, M., & Ratha, N. (2018). Are image-agnostic universal adversarial perturbations for face recognition difficult to detect? In IEEE international conference on biometrics: Theory, applications, and systems.
Agarwal, A., Yadav, D., Kohli, N., Singh, R., Vatsa, M., & Noore, A. (2017b). Face presentation attack with latex masks in multispectral videos. In IEEE conference on computer vision and pattern recognition workshops (pp. 275–283).
Akhtar, N., & Mian, A. (2018). Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6, 14410–14430.
Alaifari, R., Alberti, G. S., & Gauksson, T. (2018). ADef: An iterative algorithm to construct adversarial deformations. arXiv preprint arXiv:1804.07729.
Athalye, A., & Sutskever, I. (2018). Synthesizing robust adversarial examples. In International conference on machine learning.
Bay, H., Tuytelaars, T., & Van Gool, L. (2006). SURF: Speeded up robust features. In European conference on computer vision (pp. 404–417).
Beveridge, J., Phillips, P., Bolme, D., Draper, B., Given, G., Lui, Y. M., Teli, M., Zhang, H., Scruggs, W., Bowyer, K., Flynn, P., & Cheng, S. (2013). The challenge of face recognition from digital point-and-shoot cameras. In IEEE conference on biometrics: Theory, applications and systems.
Bhagoji, A. N., Cullina, D., & Mittal, P. (2017). Dimensionality reduction as a defense against evasion attacks on machine learning classifiers. arXiv preprint arXiv:1704.02654.
Bharati, A., Singh, R., Vatsa, M., & Bowyer, K. W. (2016). Detecting facial retouching using supervised deep learning. IEEE Transactions on Information Forensics and Security, 11(9), 1903–1913.
Biggio, B., Fumera, G., Marcialis, G. L., & Roli, F. (2017). Statistical meta-analysis of presentation attacks for secure multibiometric systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(3), 561–575.
Boulkenafet, Z., Komulainen, J., & Hadid, A. (2016). Face spoofing detection using colour texture analysis. IEEE Transactions on Information Forensics and Security, 11(8), 1818–1830.
Boulkenafet, Z., Komulainen, J., & Hadid, A. (2017). Face antispoofing using speeded-up robust features and Fisher vector encoding. IEEE Signal Processing Letters, 24(2), 141–145.
Bousmalis, K., Trigeorgis, G., Silberman, N., Krishnan, D., & Erhan, D. (2016). Domain separation networks. Advances in Neural Information Processing Systems, 29, 343–351.
Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. In IEEE symposium on security and privacy (pp. 39–57).
Chen, J., Deng, Y., Bai, G., & Su, G. (2015). Face image quality assessment based on learning to rank. IEEE Signal Processing Letters, 22(1), 90–94.
Chen, P. Y., Sharma, Y., Zhang, H., Yi, J., & Hsieh, C. J. (2018). EAD: Elastic-net attacks to deep neural networks via adversarial examples. In Thirty-second AAAI conference on artificial intelligence.
Chhabra, S., Singh, R., Vatsa, M., & Gupta, G. (2018). Anonymizing k-facial attributes via adversarial perturbations. In International joint conference on artificial intelligence (pp. 656–662).
Cisse, M. M., Adi, Y., Neverova, N., & Keshet, J. (2017). Houdini: Fooling deep structured visual and speech recognition models with adversarial examples. In Advances in neural information processing systems (pp. 6977–6987).
Das, N., Shanbhogue, M., Chen, S. T., Hohman, F., Chen, L., Kounavis, M. E., & Chau, D. H. (2017). Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression. arXiv preprint arXiv:1705.02900.
de Souza, G. B., da Silva Santos, D. F., Pires, R. G., Marana, A. N., & Papa, J. P. (2017). Deep texture features for robust face spoofing detection. IEEE Transactions on Circuits and Systems II: Express Briefs, 64(12), 1397–1401.
Deng, J., Dong, W., Socher, R., Li, L., Li, K., & Li, F.-F. (2009). ImageNet: A large-scale hierarchical image database. In IEEE conference on computer vision and pattern recognition (pp. 248–255).
Dziugaite, G. K., Ghahramani, Z., & Roy, D. M. (2016). A study of the effect of JPG compression on adversarial images. arXiv preprint arXiv:1608.00853.
Feinman, R., Curtin, R. R., Shintre, S., & Gardner, A. B. (2017). Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410.
Gan, J., Li, S., Zhai, Y., & Liu, C. (2017). 3D convolutional neural network based on face anti-spoofing. In 2nd international conference on multimedia and image processing (pp. 1–5).
Goel, A., Singh, A., Agarwal, A., Vatsa, M., & Singh, R. (2018). SmartBox: Benchmarking adversarial detection and mitigation algorithms for face recognition. In IEEE international conference on biometrics: Theory, applications, and systems.
Goodfellow, I. J., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. In International conference on learning representations.
Goswami, G., Ratha, N., Agarwal, A., Singh, R., & Vatsa, M. (2018). Unravelling robustness of deep learning based face recognition against adversarial attacks. In AAAI conference on artificial intelligence.
Gross, R., Matthews, I., Cohn, J., Kanade, T., & Baker, S. (2010). Multi-PIE. Image and Vision Computing, 28(5), 807–813.
Grosse, K., Manoharan, P., Papernot, N., Backes, M., & McDaniel, P. (2017). On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280.
Gu, S., & Rigazio, L. (2014). Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068.
Guo, C., Rana, M., Cissé, M., & van der Maaten, L. (2018). Countering adversarial images using input transformations. In International conference on learning representations.
Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the knowledge in a neural network. Stat, 1050, 9.
Huang, G. B., Ramesh, M., Berg, T., & Learned-Miller, E. (2007). Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst.
King, D. E. (2009). Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10, 1755–1758.
Laskov, P., & Lippmann, R. (2010). Machine learning in adversarial environments. Machine Learning, 81(2), 115–119.
Lee, H., Han, S., & Lee, J. (2017). Generative adversarial trainer: Defense to adversarial perturbations with GAN. arXiv preprint arXiv:1705.03387.
Li, X., & Li, F. (2017). Adversarial examples detection in deep networks with convolutional filter statistics. In International conference on computer vision.
Liang, B., Li, H., Su, M., Li, X., Shi, W., & Wang, X. (2017). Detecting adversarial examples in deep networks with adaptive noise reduction. arXiv preprint arXiv:1705.08378.
Liu, J., Deng, Y., Bai, T., & Huang, C. (2015). Targeting ultimate accuracy: Face recognition via deep embedding. arXiv preprint arXiv:1506.07310.
Liu, L., Liu, B., Huang, H., & Bovik, A. C. (2014). No-reference image quality assessment based on spatial and spectral entropies. Signal Processing: Image Communication, 29(8), 856–863.
Liu, M. Y., & Tuzel, O. (2016). Coupled generative adversarial networks. Advances in Neural Information Processing Systems, 29, 469–477.
Lu, J., Issaranon, T., & Forsyth, D. (2017). SafetyNet: Detecting and rejecting adversarial examples robustly. In IEEE international conference on computer vision (pp. 446–454).
Luo, Y., Boix, X., Roig, G., Poggio, T., & Zhao, Q. (2015). Foveation-based mechanisms alleviate adversarial examples. arXiv preprint arXiv:1511.06292.
Majumdar, A., Singh, R., & Vatsa, M. (2017). Face verification via class sparsity based supervised encoding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6), 1273–1280.
Manjani, I., Tariyal, S., Vatsa, M., Singh, R., & Majumdar, A. (2017). Detecting silicone mask-based presentation attack via deep dictionary learning. IEEE Transactions on Information Forensics and Security, 12(7), 1713–1723.
Meng, D., & Chen, H. (2017). MagNet: A two-pronged defense against adversarial examples. In ACM SIGSAC conference on computer and communications security (pp. 135–147).
Metzen, J. H., Genewein, T., Fischer, V., & Bischoff, B. (2017). On detecting adversarial perturbations. In International conference on learning representations.
Miyato, T., Dai, A. M., & Goodfellow, I. (2017). Adversarial training methods for semi-supervised text classification. In International conference on learning representations.
Moorthy, A. K., & Bovik, A. C. (2010). A two-step framework for constructing blind image quality indices. IEEE Signal Processing Letters, 17(5), 513–516.
Moosavi-Dezfooli, S. M., Fawzi, A., Fawzi, O., & Frossard, P. (2017). Universal adversarial perturbations. In IEEE conference on computer vision and pattern recognition (pp. 1765–1773).
Moosavi-Dezfooli, S. M., Fawzi, A., & Frossard, P. (2016). DeepFool: A simple and accurate method to fool deep neural networks. In IEEE conference on computer vision and pattern recognition (pp. 2574–2582).
Nayebi, A., & Ganguli, S. (2017). Biologically inspired protection of deep networks from adversarial attacks. arXiv preprint arXiv:1703.09202.
Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In IEEE conference on computer vision and pattern recognition (pp. 427–436).
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., & Swami, A. (2017). Practical black-box attacks against machine learning. In ACM Asia conference on computer and communications security (pp. 506–519).
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., & Swami, A. (2016a). The limitations of deep learning in adversarial settings. In IEEE European symposium on security and privacy (pp. 372–387).
Papernot, N., McDaniel, P., Wu, X., Jha, S., & Swami, A. (2016b). Distillation as a defense to adversarial perturbations against deep neural networks. In IEEE symposium on security and privacy (pp. 582–597).
Parkhi, O. M., Vedaldi, A., & Zisserman, A. (2015). Deep face recognition. In British machine vision conference (Vol. 1, p. 6).
Patel, K., Han, H., Jain, A. K., & Ott, G. (2015). Live face video vs. spoof face video: Use of moiré patterns to detect replay video attacks. In International conference on biometrics (pp. 98–105).
Phillips, P. J., Flynn, P. J., Beveridge, J. R., Scruggs, W., O'Toole, A. J., Bolme, D., Bowyer, K. W., Draper, B. A., Givens, G. H., Lui, Y. M., Sahibzada, H., Scallan, J. A., & Weimer, S. (2009). Overview of the multiple biometrics grand challenge. In Advances in biometrics (pp. 705–714).
Prakash, A., Moran, N., Garber, S., DiLillo, A., & Storer, J. (2018). Deflecting adversarial attacks with pixel deflection. In IEEE conference on computer vision and pattern recognition (pp. 8571–8580).
Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
Raghavendra, R., Venkatesh, S., Raja, K., Cheikh, F., & Busch, C. (2017). On the vulnerability of extended multispectral face recognition systems towards presentation attacks. In IEEE international conference on identity, security and behavior analysis.
Rakin, A. S., Yi, J., Gong, B., & Fan, D. (2018). Defend deep neural networks against adversarial examples via fixed and dynamic quantized activation functions. arXiv preprint arXiv:1807.06714.
Ramachandra, R., & Busch, C. (2017). Presentation attack detection methods for face recognition systems: A comprehensive survey. ACM Computing Surveys, 50(1), 8:1–8:37.
Ranjan, R., Sankaranarayanan, S., Castillo, C. D., & Chellappa, R. (2017). Improving network robustness against adversarial attacks with compact convolution. arXiv preprint arXiv:1712.00699.
Ratha, N. K., Connell, J. H., & Bolle, R. M. (2001). An analysis of minutiae matching strength. In Audio- and video-based biometric person authentication: Third international conference (pp. 223–228).
Rauber, J., Brendel, W., & Bethge, M. (2017). Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131.
Ross, A. S., & Doshi-Velez, F. (2018). Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In Thirty-second AAAI conference on artificial intelligence.
Rozsa, A., Günther, M., Rudd, E. M., & Boult, T. E. (2016). Are facial attributes adversarially robust? In International conference on pattern recognition (pp. 3121–3127).
Rudd, E. M., Gunther, M., & Boult, T. E. (2016). PARAPH: Presentation attack rejection by analyzing polarization hypotheses. In IEEE conference on computer vision and pattern recognition workshops.
Sabour, S., Cao, Y., Faghri, F., & Fleet, D. J. (2016). Adversarial manipulation of deep representations. In International conference on learning representations.
Samangouei, P., Kabkab, M., & Chellappa, R. (2018). Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In International conference on learning representations.
Schroff, F., Kalenichenko, D., & Philbin, J. (2015). FaceNet: A unified embedding for face recognition and clustering. In IEEE conference on computer vision and pattern recognition (pp. 815–823).
Sharif, M., Bhagavatula, S., Bauer, L., & Reiter, M. K. (2016). Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In ACM SIGSAC conference on computer and communications security (pp. 1528–1540).
Siddiqui, T. A., Bharadwaj, S., Dhamecha, T. I., Agarwal, A., Vatsa, M., Singh, R., & Ratha, N. (2016). Face anti-spoofing with multifeature videolet aggregation. In International conference on pattern recognition (pp. 1035–1040).
Smith, D. F., Wiliem, A., & Lovell, B. C. (2015). Face recognition on consumer devices: Reflections on replay attacks. IEEE Transactions on Information Forensics and Security, 10(4), 736–745.
Song, Y., Kim, T., Nowozin, S., Ermon, S., & Kushman, N. (2018). PixelDefend: Leveraging generative models to understand and defend against adversarial examples. In International conference on learning representations.
Sun, Y., Wang, X., & Tang, X. (2015). Deeply learned face representations are sparse, selective, and robust. In IEEE conference on computer vision and pattern recognition.
Suykens, J. A., & Vandewalle, J. (1999). Least squares support vector machine classifiers. Neural Processing Letters, 9(3), 293–300.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2014). Intriguing properties of neural networks. In International conference on learning representations. arXiv:1312.6199.
Taigman, Y., Yang, M., Ranzato, M., & Wolf, L. (2014). DeepFace: Closing the gap to human-level performance in face verification. In IEEE conference on computer vision and pattern recognition (pp. 1701–1708).
Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., & McDaniel, P. (2018). Ensemble adversarial training: Attacks and defenses. In International conference on learning representations.
Viola, P., & Jones, M. J. (2004). Robust real-time face detection. International Journal of Computer Vision, 57(2), 137–154.
Wu, X., He, R., Sun, Z., & Tan, T. (2018). A light CNN for deep face representation with noisy labels. IEEE Transactions on Information Forensics and Security, 13(11), 2884–2896.
Xie, C., Wang, J., Zhang, Z., Ren, Z., & Yuille, A. (2018). Mitigating adversarial effects through randomization. In International conference on learning representations.
Xie, C., Wang, J., Zhang, Z., Zhou, Y., Xie, L., & Yuille, A. (2017). Adversarial examples for semantic segmentation and object detection. In IEEE international conference on computer vision.
Xu, W., Evans, D., & Qi, Y. (2018). Feature squeezing: Detecting adversarial examples in deep neural networks. In Network and distributed system security symposium.
Ye, S., Wang, S., Wang, X., Yuan, B., Wen, W., & Lin, X. (2018). Defending DNN adversarial attacks with pruning and logits augmentation. In International conference on learning representations workshop.
Metadata
Title
Detecting and Mitigating Adversarial Perturbations for Robust Face Recognition
Authors
Gaurav Goswami
Akshay Agarwal
Nalini Ratha
Richa Singh
Mayank Vatsa
Publication date
22-03-2019
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 6-7/2019
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-019-01160-w
