
18.05.2021 | Regular Paper

Code generation from a graphical user interface via attention-based encoder–decoder model

Authors: Wen-Yin Chen, Pavol Podstreleny, Wen-Huang Cheng, Yung-Yao Chen, Kai-Lung Hua

Published in: Multimedia Systems | Issue 1/2022

Abstract

Code generation from graphical user interface images is a promising area of research. Recent progress in machine learning has made it possible to transform a user interface into code, and the encoder–decoder framework is one way to tackle this task. Our model implements the encoder–decoder framework with an attention mechanism that lets the decoder focus on a subset of salient image features at each decoding step, which in turn helps it generate token sequences with higher accuracy. Experimental results show that our model outperforms previously proposed models on the pix2code benchmark dataset.
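
To make the architecture described in the abstract concrete, the sketch below shows the general encoder-decoder-with-attention pattern in PyTorch: a small CNN encodes the GUI screenshot into a grid of region features, and an LSTM decoder computes additive (Bahdanau-style) attention weights over those regions at every step before predicting the next DSL token. The layer names, sizes, and the choice of additive attention are illustrative assumptions, not the authors' exact implementation.

    # Minimal sketch of an attention-based encoder-decoder for image-to-code
    # generation. Hyperparameters and the additive-attention formulation are
    # illustrative assumptions, not the paper's exact architecture.
    import torch
    import torch.nn as nn

    class CNNEncoder(nn.Module):
        """Encodes a GUI screenshot into a grid of spatial feature vectors."""
        def __init__(self, feat_dim=256):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(128, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            )

        def forward(self, img):                      # img: (B, 3, H, W)
            f = self.conv(img)                       # (B, D, H', W')
            return f.flatten(2).transpose(1, 2)      # (B, N, D), N = H'*W' regions

    class AttentionDecoder(nn.Module):
        """LSTM decoder that attends over image regions at each step."""
        def __init__(self, vocab_size, feat_dim=256, hid_dim=512, emb_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.w_feat = nn.Linear(feat_dim, hid_dim)   # projects region features
            self.w_hid = nn.Linear(hid_dim, hid_dim)     # projects hidden state
            self.w_score = nn.Linear(hid_dim, 1)         # scalar attention score
            self.lstm = nn.LSTMCell(emb_dim + feat_dim, hid_dim)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, feats, tokens):            # feats: (B, N, D), tokens: (B, T)
            B = feats.size(0)
            h = feats.new_zeros(B, self.lstm.hidden_size)
            c = feats.new_zeros(B, self.lstm.hidden_size)
            logits = []
            for t in range(tokens.size(1)):
                # Additive attention: score each region against the hidden state,
                # then build a context vector as the attention-weighted sum.
                e = self.w_score(torch.tanh(
                    self.w_feat(feats) + self.w_hid(h).unsqueeze(1))).squeeze(-1)
                alpha = torch.softmax(e, dim=1)                 # (B, N) weights
                ctx = (alpha.unsqueeze(-1) * feats).sum(dim=1)  # (B, D) context
                x = torch.cat([self.embed(tokens[:, t]), ctx], dim=1)
                h, c = self.lstm(x, (h, c))
                logits.append(self.out(h))
            return torch.stack(logits, dim=1)        # (B, T, vocab_size)

    # Hypothetical usage with a toy 20-token DSL vocabulary:
    enc, dec = CNNEncoder(), AttentionDecoder(vocab_size=20)
    img = torch.randn(2, 3, 224, 224)                # batch of GUI screenshots
    tokens = torch.randint(0, 20, (2, 15))           # teacher-forced DSL tokens
    logits = dec(enc(img), tokens)                   # (2, 15, 20)

Training would then minimize the cross-entropy between these logits and the next ground-truth token at each position; because the attention weights are recomputed at every step, the decoder can attend to the part of the layout it is currently describing.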

Metadata
Title
Code generation from a graphical user interface via attention-based encoder–decoder model
Authors
Wen-Yin Chen
Pavol Podstreleny
Wen-Huang Cheng
Yung-Yao Chen
Kai-Lung Hua
Publication date
18.05.2021
Publisher
Springer Berlin Heidelberg
Published in
Multimedia Systems / Issue 1/2022
Print ISSN: 0942-4962
Electronic ISSN: 1432-1882
DOI
https://doi.org/10.1007/s00530-021-00804-7
