
26.02.2024 | Original Article

Evaluating robustness of support vector machines with the Lagrangian dual approach

Authors: Yuting Liu, Hong Gu, Pan Qin

Published in: Neural Computing and Applications | Issue 14/2024


Abstract

Adversarial examples pose a considerable security threat to support vector machines (SVMs), especially those used in safety-critical applications. Robustness verification is therefore an essential issue for SVMs, since it can provide provable robustness guarantees against various adversarial attacks. The evaluation results obtained through robustness verification can serve as a security guarantee for the deployment of SVMs. However, existing verification methods often do not perform well on SVMs with nonlinear kernels. To this end, we propose a method that improves verification performance for SVMs with nonlinear kernels. We first formalize the adversarial robustness evaluation of SVMs as an optimization problem with a feedforward neural network representation. Then, a lower bound on the original problem is obtained by solving the Lagrangian dual problem. Finally, the adversarial robustness of SVMs is evaluated with respect to this lower bound. We evaluate the adversarial robustness of SVMs with linear and nonlinear kernels on the MNIST and Fashion-MNIST datasets. The experimental results show that our method achieves a higher percentage of provable robustness on the test set compared with the state of the art.
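To make the certified-robustness criterion concrete, the sketch below shows the quantity that such verification lower-bounds: the worst-case signed margin y·f(x+δ) of a trained SVM over an ℓ∞ perturbation budget ε, where a positive lower bound certifies that the prediction cannot be flipped. This is a minimal illustration, not the paper's Lagrangian dual procedure; the function names (certify_linear, certify_rbf_interval) and the NumPy-based setup are assumptions for the example. For a linear kernel the worst-case margin has a closed form; for an RBF kernel, a simple interval-style bound is shown for contrast. Such bounds are sound but typically loose for nonlinear kernels, which is the gap a Lagrangian dual formulation aims to close.

```python
# Illustrative sketch (not the paper's method): sound lower bounds on the
# worst-case margin y * f(x + delta) of a trained SVM over ||delta||_inf <= eps.
# Function and variable names are hypothetical.
import numpy as np


def certify_linear(w, b, x, y, eps):
    """Exact worst-case margin for a linear SVM f(x) = w.x + b.

    min_{||delta||_inf <= eps} y * (w.(x + delta) + b)
        = y * (w.x + b) - eps * ||w||_1
    A positive value certifies robustness at (x, y).
    """
    worst_margin = y * (np.dot(w, x) + b) - eps * np.linalg.norm(w, ord=1)
    return worst_margin > 0, worst_margin


def certify_rbf_interval(alpha_y, sv, b, gamma, x, y, eps):
    """Loose but sound lower bound for an RBF-kernel SVM.

    f(x) = sum_i alpha_y[i] * exp(-gamma * ||sv[i] - x||^2) + b, where
    alpha_y[i] = alpha_i * y_i. Each kernel term is bounded over the box
    ||delta||_inf <= eps by bounding the squared distance coordinate-wise.
    """
    diff = np.abs(sv - x)                                 # |sv_ij - x_j| per coordinate
    dist_min_sq = np.sum(np.maximum(diff - eps, 0.0) ** 2, axis=1)
    dist_max_sq = np.sum((diff + eps) ** 2, axis=1)
    k_max = np.exp(-gamma * dist_min_sq)                  # kernel decreases with distance
    k_min = np.exp(-gamma * dist_max_sq)
    coeff = y * alpha_y                                   # signed contribution per support vector
    lower = np.sum(np.where(coeff >= 0, coeff * k_min, coeff * k_max)) + y * b
    return lower > 0, lower


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, b, x = rng.normal(size=5), 0.1, rng.normal(size=5)
    y = 1.0 if np.dot(w, x) + b > 0 else -1.0
    print(certify_linear(w, b, x, y, eps=0.05))

    sv, alpha_y = rng.normal(size=(10, 5)), rng.normal(size=10)
    print(certify_rbf_interval(alpha_y, sv, b, gamma=0.5, x=x, y=1.0, eps=0.05))
```

If the returned bound is positive, no perturbation within the budget can change the predicted label. A non-positive bound is inconclusive in the interval-style RBF case, since the looseness of the bound, rather than a genuine adversarial example, may be responsible; tighter bounds for nonlinear kernels are what the Lagrangian dual evaluation provides.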

Metadata
Title
Evaluating robustness of support vector machines with the Lagrangian dual approach
Authors
Yuting Liu
Hong Gu
Pan Qin
Publication date
26.02.2024
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 14/2024
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-024-09490-8
