Published in: Multimedia Systems 4/2023

30.03.2023 | Regular Paper

Style matching CAPTCHA: match neural transferred styles to thwart intelligent attacks

Authors: Palash Ray, Asish Bera, Debasis Giri, Debotosh Bhattacharjee

Abstract

The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is widely used to prevent malicious automated attacks on various online services. Text- and image-CAPTCHAs have gained the broadest acceptance owing to their usability and security. However, recent progress in deep learning implies that text-CAPTCHAs can easily be broken by various fraudulent attacks. Thus, image-CAPTCHAs are receiving research attention to enhance both usability and security. In this work, neural style transfer (NST) is adapted to design an image-CAPTCHA algorithm that improves security while preserving human performance. In existing NST-rendered image-CAPTCHAs, a user is asked to identify or localize the salient object (i.e., the content), a task that off-the-shelf intelligent tools can solve effortlessly. In contrast, we propose a Style Matching CAPTCHA (SMC) that asks a user to select the style image that was applied in the NST method. A user can solve a random SMC challenge by recognizing the semantic correlation between the content and the style in the rendered output as a cue. Solving performance is evaluated on 1368 responses collected from 152 participants through a web application. The average solving accuracy over three sessions is 95.61%, and the average response time per challenge per user is 6.52 s. Likewise, a smartphone application (SMC-App) is devised using the proposed method; the average solving accuracy through SMC-App is 96.33%, and the average solving time is 5.13 s. To evaluate the vulnerability of SMC, deep learning-based attacks using Convolutional Neural Networks (CNNs), such as ResNet-50 and Inception-v3, are simulated. Across various attack studies on SMC, the average attack accuracy with ResNet-50 and Inception-v3 is 37%, indicating stronger attack resistance than existing methods. Moreover, in-depth security analysis, experimental insights, and comparative studies demonstrate the suitability of the proposed SMC.
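
To make the SMC workflow described above concrete, the sketch below outlines how a single challenge could be generated and verified on the server side. It is a minimal illustration under stated assumptions, not the authors' implementation: the names (`StyleMatchingChallenge`, `apply_nst`, `generate_challenge`, `verify`) and the four-option layout are hypothetical, and `apply_nst` is a placeholder for any NST backend (for example, a Gatys-style VGG optimiser or a pretrained fast style-transfer model).

```python
# Minimal, hypothetical sketch of SMC challenge generation and verification.
# Class and function names are illustrative and not taken from the paper.
import random
from dataclasses import dataclass
from pathlib import Path


@dataclass
class StyleMatchingChallenge:
    stylized_image: Path          # NST rendering shown to the user
    candidate_styles: list[Path]  # style thumbnails displayed as options
    answer_index: int             # index of the correct style (kept server-side)


def apply_nst(content: Path, style: Path) -> Path:
    """Placeholder for a neural style transfer backend, e.g. a Gatys-style
    VGG-based optimiser or a pretrained fast-NST model. It should render
    `content` in the visual style of `style` and return the output path."""
    raise NotImplementedError


def generate_challenge(content_pool: list[Path],
                       style_pool: list[Path],
                       n_options: int = 4) -> StyleMatchingChallenge:
    """Pick a random content/style pair, render it with NST, and pair the
    rendering with the correct style plus (n_options - 1) distractor styles."""
    content = random.choice(content_pool)
    style = random.choice(style_pool)
    distractors = random.sample([s for s in style_pool if s != style], n_options - 1)
    options = distractors + [style]
    random.shuffle(options)
    return StyleMatchingChallenge(
        stylized_image=apply_nst(content, style),
        candidate_styles=options,
        answer_index=options.index(style),
    )


def verify(challenge: StyleMatchingChallenge, selected_index: int) -> bool:
    # The challenge is solved only if the selected thumbnail is the style
    # that was actually transferred onto the content image.
    return selected_index == challenge.answer_index
```

In this split, verification reduces to a single index comparison on the server; the security burden rests on the visual matching task itself, since an automated solver must relate texture and colour statistics between the stylized rendering and the candidate styles rather than recognize a salient object.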

Literatur
10.
Zurück zum Zitat Chen, H., Zhao, L., Wang, Z., Zhang, H., Zuo, Z., Li, A., Xing, W., Lu, D.: Dualast: Dual style-learning networks for artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 872–881 (2021) Chen, H., Zhao, L., Wang, Z., Zhang, H., Zuo, Z., Li, A., Xing, W., Lu, D.: Dualast: Dual style-learning networks for artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 872–881 (2021)
11.
Zurück zum Zitat Chen, H.Y., Fang, I., Cheng, C.M., Chiu, W.C., et al.: Self-contained stylization via steganography for reverse and serial style transfer. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2163–2171 (2020) Chen, H.Y., Fang, I., Cheng, C.M., Chiu, W.C., et al.: Self-contained stylization via steganography for reverse and serial style transfer. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2163–2171 (2020)
21.
Zurück zum Zitat Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2414–2423 (2016) Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2414–2423 (2016)
28.
Zurück zum Zitat He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778 (2016) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778 (2016)
41.
Zurück zum Zitat Lewis, J.: Fast normalized cross-correlation, 1995. In: Vision Interface, vol. 2010, pp. 120–123 (2010) Lewis, J.: Fast normalized cross-correlation, 1995. In: Vision Interface, vol. 2010, pp. 120–123 (2010)
43.
Zurück zum Zitat Liu, X.C., Yang, Y.L., Hall, P.: Learning to warp for style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3702–3711 (2021) Liu, X.C., Yang, Y.L., Hall, P.: Learning to warp for style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3702–3711 (2021)
44.
Zurück zum Zitat Ma, Y., Zhao, C., Li, X., Basu, A.: Rast: Restorable arbitrary style transfer via multi-restoration. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 331–340 (2023) Ma, Y., Zhao, C., Li, X., Basu, A.: Rast: Restorable arbitrary style transfer via multi-restoration. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 331–340 (2023)
46.
48.
Zurück zum Zitat Osadchy, M., Hernandez-Castro, J., Gibson, S., Dunkelman, O., Pérez-Cabo, D.: No bot expects the deepcaptcha! introducing immutable adversarial examples with applications to captcha. Cryptology ePrint Archive (2016) Osadchy, M., Hernandez-Castro, J., Gibson, S., Dunkelman, O., Pérez-Cabo, D.: No bot expects the deepcaptcha! introducing immutable adversarial examples with applications to captcha. Cryptology ePrint Archive (2016)
49.
Zurück zum Zitat Polakis, I., Ilia, P., Maggi, F., Lancini, M., Kontaxis, G., Zanero, S., Ioannidis, S., Keromytis, A.D.: Faces in the distorting mirror: Revisiting photo-based social authentication. In: Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pp. 501–512 (2014). https://doi.org/10.1145/2660267.2660317 Polakis, I., Ilia, P., Maggi, F., Lancini, M., Kontaxis, G., Zanero, S., Ioannidis, S., Keromytis, A.D.: Faces in the distorting mirror: Revisiting photo-based social authentication. In: Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pp. 501–512 (2014). https://​doi.​org/​10.​1145/​2660267.​2660317
50.
Zurück zum Zitat Rathor, V.S., Garg, B., Patil, M., Sharma, G.: Security analysis of image captcha using a mask r-cnn-based attack model. Int. J. Ad Hoc Ubiquitous Comput. 36(4), 238–247 (2021)CrossRef Rathor, V.S., Garg, B., Patil, M., Sharma, G.: Security analysis of image captcha using a mask r-cnn-based attack model. Int. J. Ad Hoc Ubiquitous Comput. 36(4), 238–247 (2021)CrossRef
52.
Zurück zum Zitat Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems 28 (2015) Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems 28 (2015)
56.
Zurück zum Zitat Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE international conference on computer vision, pp. 618–626 (2017) Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE international conference on computer vision, pp. 618–626 (2017)
57.
Zurück zum Zitat Shet, V.: Are you a robot? introducing no captcha recaptcha. Google Security Blog 3, 12 (2014) Shet, V.: Are you a robot? introducing no captcha recaptcha. Google Security Blog 3, 12 (2014)
63.
Zurück zum Zitat Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826 (2016) Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826 (2016)
67.
Zurück zum Zitat Uzun, E., Chung, S.P.H., Essa, I., Lee, W.: rtcaptcha: A real-time captcha based liveness detection system. In: NDSS (2018) Uzun, E., Chung, S.P.H., Essa, I., Lee, W.: rtcaptcha: A real-time captcha based liveness detection system. In: NDSS (2018)
68.
Zurück zum Zitat Von Ahn, L., Blum, M., Hopper, N.J., Langford, J.: Captcha: Using hard ai problems for security. In: International Conference on the Theory and Applications of Cryptographic Techniques, pp. 294–311. Springer (2003). https://doi.org/10.1007/3-540-39200-9 Von Ahn, L., Blum, M., Hopper, N.J., Langford, J.: Captcha: Using hard ai problems for security. In: International Conference on the Theory and Applications of Cryptographic Techniques, pp. 294–311. Springer (2003). https://​doi.​org/​10.​1007/​3-540-39200-9
80.
Zurück zum Zitat Zhang, K., Zheng, Y.: Information Security: 7th International Conference, ISC 2004, Palo Alto, CA, USA, September 27-29, 2004, Proceedings, vol. 3225. Springer (2004) Zhang, K., Zheng, Y.: Information Security: 7th International Conference, ISC 2004, Palo Alto, CA, USA, September 27-29, 2004, Proceedings, vol. 3225. Springer (2004)
83.
Zurück zum Zitat Zhu, B., Liu, J., Li, Q., Li, S., Xu, N.: Image-based captcha exploiting context in object recognition (2013). US Patent 8,483,518 Zhu, B., Liu, J., Li, Q., Li, S., Xu, N.: Image-based captcha exploiting context in object recognition (2013). US Patent 8,483,518
84.
Zurück zum Zitat Zhu, B.B., Yan, J., Li, Q., Yang, C., Liu, J., Xu, N., Yi, M., Cai, K.: Attacks and design of image recognition captchas. In: Proceedings of the 17th ACM conference on Computer and communications security, pp. 187–200. ACM (2010). https://doi.org/10.1145/1866307.1866329 Zhu, B.B., Yan, J., Li, Q., Yang, C., Liu, J., Xu, N., Yi, M., Cai, K.: Attacks and design of image recognition captchas. In: Proceedings of the 17th ACM conference on Computer and communications security, pp. 187–200. ACM (2010). https://​doi.​org/​10.​1145/​1866307.​1866329
Metadata
Title
Style matching CAPTCHA: match neural transferred styles to thwart intelligent attacks
Authors
Palash Ray
Asish Bera
Debasis Giri
Debotosh Bhattacharjee
Publication date
30.03.2023
Publisher
Springer Berlin Heidelberg
Published in
Multimedia Systems / Issue 4/2023
Print ISSN: 0942-4962
Electronic ISSN: 1432-1882
DOI
https://doi.org/10.1007/s00530-023-01075-0
