
02-03-2024 | Original Article

On the adversarial robustness of generative autoencoders in the latent space

Authors: Mingfei Lu, Badong Chen

Published in: Neural Computing and Applications | Issue 14/2024

Abstract

Generative autoencoders, such as variational autoencoders (VAEs) and adversarial autoencoders, have achieved great success in many real-world applications, including image generation and signal communication. However, little attention has been devoted to their robustness during practical deployment. Owing to their probabilistic latent structure, VAEs may suffer from problems such as a mismatch between the posterior distribution of the latent codes and the real data manifold, or discontinuities in that posterior distribution. This leaves a back door through which malicious attackers can collapse VAEs from the latent space, especially in scenarios where the encoder and decoder are used separately, such as communication and compressed sensing. In this work, we provide the first study of the adversarial robustness of generative autoencoders in the latent space. Specifically, we empirically demonstrate the latent vulnerability of popular generative autoencoders through attacks in the latent space. We also compare variational autoencoders with their deterministic variants and observe that the latter perform better in terms of latent robustness. Meanwhile, we identify a potential trade-off between adversarial robustness and the degree of disentanglement of the latent codes. Additionally, we verify that the latent robustness of generative autoencoders can be improved through adversarial training. In summary, we call attention to the adversarial latent robustness of generative autoencoders, analyze several robustness-related issues, and offer insights into a series of key challenges.
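To make the threat model concrete, the sketch below illustrates a PGD-style attack in the latent space: it perturbs the latent code of a trained autoencoder within a small L-infinity budget so that the decoded output drifts as far as possible from the clean reconstruction. This is a minimal illustrative example under stated assumptions, not the exact attack formulation of the paper; it assumes a PyTorch model exposing hypothetical encode(...) (returning a deterministic latent code, e.g. the posterior mean of a VAE) and decode(...) methods.

```python
import torch
import torch.nn.functional as F

def latent_pgd_attack(model, x, eps=0.5, alpha=0.05, steps=40):
    """PGD-style attack in the latent space of a generative autoencoder.

    Searches for a perturbation delta with ||delta||_inf <= eps such that
    decode(encode(x) + delta) differs as much as possible (in MSE) from the
    clean reconstruction decode(encode(x)).
    """
    model.eval()
    with torch.no_grad():
        z_clean = model.encode(x)        # hypothetical API: deterministic latent code
        x_clean = model.decode(z_clean)  # clean reconstruction used as reference

    # Random start inside the eps-ball, as in standard PGD.
    delta = torch.empty_like(z_clean).uniform_(-eps, eps).requires_grad_(True)

    for _ in range(steps):
        x_adv = model.decode(z_clean + delta)
        loss = F.mse_loss(x_adv, x_clean)        # maximize reconstruction distortion
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()         # gradient-ascent step
            delta.clamp_(-eps, eps)              # project back into the eps-ball

    return (z_clean + delta).detach()
```

Under the same assumptions, latent adversarial training amounts to running a few such steps during training and adding the reconstruction loss on the perturbed latent code to the standard training objective.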


Metadata
Title: On the adversarial robustness of generative autoencoders in the latent space
Authors: Mingfei Lu, Badong Chen
Publication date: 02-03-2024
Publisher: Springer London
Published in: Neural Computing and Applications, Issue 14/2024
Print ISSN: 0941-0643 | Electronic ISSN: 1433-3058
DOI: https://doi.org/10.1007/s00521-024-09438-y
