Published in: Journal of Cryptographic Engineering 2/2020

30-11-2019 | Regular Paper

Deep learning for side-channel analysis and introduction to ASCAD database

Authors: Ryad Benadjila, Emmanuel Prouff, Rémi Strullu, Eleonora Cagli, Cécile Dumas


Abstract

Recent works have demonstrated that deep learning algorithms are efficient for conducting security evaluations of embedded systems and have many advantages compared to other methods. Unfortunately, their hyper-parametrization has often been kept secret by the authors, who only discussed the main design principles and the attack efficiency in some specific contexts. This is an important limitation of previous works, since (1) such parametrization is known to be a challenging question in machine learning, (2) it prevents the reproducibility of the presented results, and (3) it does not allow general conclusions to be drawn. This paper aims to address these limitations in several ways. First, complementing recent works, we propose a study of deep learning algorithms applied in the context of side-channel analysis and discuss the links with classical template attacks. Secondly, we address for the first time the question of the choice of the hyper-parameters for the class of convolutional neural networks. Several benchmarks and rationales are given in the context of the analysis of a challenging masked implementation of the AES algorithm. Interestingly, our work shows that the approach followed to design the VGG-16 architecture used for image recognition also seems to be sound when it comes to fixing an architecture for side-channel analysis. To enable perfect reproducibility of our tests, this work also introduces an open platform including all the sources of the target implementation together with the campaign of electromagnetic measurements exploited in our benchmarks. This open database, named ASCAD, is the first of its kind and has been specified to serve as a common basis for further works on this subject.
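As a rough illustration of how such a trace database can be consumed, the sketch below reads an HDF5 file split into profiling and attack sets. The file name and the group/dataset names ("Profiling_traces", "Attack_traces", "traces", "labels") follow the public ASCAD repository and should be treated as assumptions here, not as a specification given in the abstract.

```python
# Minimal sketch: reading an ASCAD-style HDF5 trace database with h5py.
# Dataset names below are assumptions taken from the public ASCAD repository.
import h5py
import numpy as np

with h5py.File("ASCAD.h5", "r") as f:
    # Profiling set: traces and their labels (e.g. values of sbox(p[3] XOR k[3]))
    X_profiling = np.array(f["Profiling_traces/traces"], dtype=np.float32)
    y_profiling = np.array(f["Profiling_traces/labels"])
    # Attack set: traces kept apart for evaluating the key-ranking metric
    X_attack = np.array(f["Attack_traces/traces"], dtype=np.float32)
    y_attack = np.array(f["Attack_traces/labels"])

print(X_profiling.shape, X_attack.shape)  # e.g. (50000, 700) and (10000, 700)
```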


Footnotes
1
Some libraries (such as hyperopt or hyperas [8]) could have been tested to automate the search for accurate hyper-parameters within pre-defined sets. However, since they often perform a random search for the best parameters [7], they do not allow one to study the impact of each hyper-parameter on the side-channel attack success rate independently of the others. Moreover, they have been designed to maximize classical machine learning evaluation metrics, not SCA ranking functions, which require a batch of test traces.
 
2
We have validated that the code and the full project can be easily tested with the ChipWhisperer platform developed by C. O'Flynn [52].
 
3
In Template Attacks, the profiling set and the attack set are assumed to be different; namely, the traces \(\varvec{\ell}_{i}\) involved in (2) have not been used for the profiling.
 
4
The name generative is due to the fact that it is possible to generate synthetic traces by sampling from such probability distributions.
 
5
When no ambiguity is present, we will simply refer to the architecture hyper-parameters as hyper-parameters.
 
6
We insist here on the fact that the model is trained from scratch at each iteration of the loop over t.
 
7
and also different values of \(k^{\star }\) if this is relevant for the attacked algorithm.
 
8
Another metric, the prediction error (PE), is sometimes used in combination with the accuracy: it is defined as the expected error of the model over the training sets, \(\mathsf{PE}_{N_{\text{train}}}(\hat{\mathbf{g}}) = 1 - \mathsf{ACC}_{N_{\text{train}}}(\hat{\mathbf{g}})\).
 
9
The SNR is sometimes named F-Test to refer to its original introduction by Fisher [18]. For a noisy observation \(L_{t}\) at time sample \(t\) of an event \(Z\), it is defined as \(\mathsf{Var}[\mathsf{E}[L_{t}\mid Z]]/\mathsf{E}[\mathsf{Var}[L_{t}\mid Z]]\).
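For illustration only, the sketch below estimates this SNR per time sample from a set of labeled traces, assuming the classes of \(Z\) are (approximately) uniformly represented; array names are illustrative and not taken from the paper's code.

```python
# Sketch: per-sample SNR, Var_z(E[L_t | Z=z]) / E_z(Var[L_t | Z=z]).
# `traces` has shape (n_traces, n_samples); `labels` holds the value of Z per trace.
import numpy as np

def snr(traces, labels):
    classes = np.unique(labels)
    means = np.array([traces[labels == z].mean(axis=0) for z in classes])
    variances = np.array([traces[labels == z].var(axis=0) for z in classes])
    # signal: variance (over classes) of the conditional means
    # noise : average (over classes) of the conditional variances
    return means.var(axis=0) / variances.mean(axis=0)
```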
 
10
Another possibility would have been to target \(\text{state0}[3] = \text{sbox}(p[3]\oplus k[3])\oplus r[3]\), which is manipulated at the end of Step 8 in Algorithm 1.
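For illustration, a sketch of how the two candidate labels could be computed from the plaintext, key and mask bytes; `sbox` stands for the standard 256-entry AES S-box table and is passed in as an argument, and the function names are purely illustrative.

```python
# Sketch: sensitive values used as trace labels (illustrative, not the paper's code).
def label_unmasked(p3, k3, sbox):
    # target used in the paper: sbox(p[3] XOR k[3])
    return sbox[p3 ^ k3]

def label_masked(p3, k3, r3, sbox):
    # alternative target from this footnote: state0[3] = sbox(p[3] XOR k[3]) XOR r[3]
    return sbox[p3 ^ k3] ^ r3
```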
 
11
Note that some peaks appearing in Fig. 1b have not been selected.
 
12
They are called Fully-Connected because each i-th input coordinate is connected to each j-th output coordinate via the weight \(\mathbf{A}[i,j]\). FC layers can be seen as a special case of linear layers; in a general linear layer, not all the connections are necessarily present, and the absence of some (i, j) connections can be formalized as a constraint on the matrix \(\mathbf{A}\) forcing its (i, j)-th coordinates to 0.
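A minimal NumPy sketch of this view, where a 0/1 mask on \(\mathbf{A}\) removes the corresponding connections; names and shapes are illustrative assumptions.

```python
# Sketch: linear layer y[j] = sum_i A[i, j] * x[i] + b[j].
# A has shape (n_in, n_out); mask[i, j] = 0 removes the (i, j) connection,
# and an all-ones mask (or mask=None) gives the fully-connected case.
import numpy as np

def linear_layer(x, A, b, mask=None):
    if mask is not None:
        A = A * mask
    return A.T @ x + b
```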
 
13
The number of units by which a filter shifts across the trace.
 
14
Called patches in machine learning terminology.
 
15
There is an ambiguity here: neural networks with many layers are sometimes called Deep Neural Networks, where the depth corresponds to the number of layers.
 
16
To prevent underflow, the log-softmax is usually preferred if several classification outputs must be combined.
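A small sketch of a numerically stable log-softmax and of how per-trace log-probabilities could then be combined by summation; this is illustrative only and not the paper's code.

```python
# Sketch: log-softmax over the network's output scores, stabilised by
# subtracting the maximum score before exponentiating.
import numpy as np

def log_softmax(scores):
    shifted = scores - scores.max()
    return shifted - np.log(np.exp(shifted).sum())

# Several attack traces' predictions can be combined without underflow by
# summing their log-probabilities:
#   combined = sum(log_softmax(s) for s in per_trace_scores)
```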
 
17
where each layer of the same type appearing in the composition is not to be intended as exactly the same function (e.g. with same input/output dimensions), but as a function of the same form.
 
18
Straightforwardly customized to apply on 1-dimensional inputs of 700 units and outputs of 256 units.
 
19
Leading to 10 training sets of size 45,000 and 10 test sets of size 5,000 to perform the 10-fold cross-validation.
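For illustration, a sketch of such a split using scikit-learn's KFold; the library choice and placeholder arrays are assumptions, the paper does not prescribe a particular tool.

```python
# Sketch: 10-fold split of a 50,000-trace profiling set, giving 45,000
# training and 5,000 test traces per fold.
import numpy as np
from sklearn.model_selection import KFold

X = np.zeros((50000, 700), dtype=np.float32)   # placeholder profiling traces
kf = KFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    # a fresh model would be trained on X[train_idx] and evaluated on X[test_idx]
    assert len(train_idx) == 45000 and len(test_idx) == 5000
```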
 
20
Having 50 epochs and a batch size equal to 50 is also a good trade-off, but between two options that seem equivalent, we preferred the solution with the higher number of epochs.
 
21
For the sake of completeness, we have also tested the SCANet model introduced in [54]. This did not yield good performance on our dataset: we obtained a mean rank of approximately 128 for each of our desynchronizations 0, 50 and 100.
 
22
Additionally, note that in this paper training and testing are used in the context of cross-validation and refer to subsets of the profiling dataset \(\mathcal{D}_{\text{profiling}}\).
 
23
We recommend performing the cross-validation only with the profiling set.
 
24
Stride pooling consists of taking the first value in each input window defined by the stride.
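An illustrative sketch, assuming NumPy-style slicing:

```python
# Sketch: stride pooling, keeping the first value of every window of length `stride`.
import numpy as np

def stride_pool(x, stride):
    return x[::stride]

stride_pool(np.arange(10), 2)   # -> array([0, 2, 4, 6, 8])
```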
 
Literature
1. Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., Zheng, X.: TensorFlow: large-scale machine learning on heterogeneous systems (2015). https://www.tensorflow.org/
2. Akkar, M.L., Giraud, C.: An implementation of DES and AES, secure against some attacks. In: Koç, Ç., Naccache, D., Paar, C. (eds.) Cryptographic Hardware and Embedded Systems–CHES 2001. Lecture Notes in Computer Science, vol. 2162, pp. 309–318. Springer, Berlin (2001)
5.
6. Bengio, Y., Grandvalet, Y.: Bias in estimating the variance of k-fold cross-validation. In: Duchesne, P., Rémillard, B. (eds.) Statistical Modeling and Analysis for Complex Data Problems, pp. 75–95. Springer, Berlin (2005)
7. Bergstra, J., Bengio, Y.: Random search for hyper-parameter optimization. J. Mach. Learn. Res. 13, 281–305 (2012)
8. Bergstra, J., Yamins, D., Cox, D.D.: Hyperopt: a Python library for optimizing the hyperparameters of machine learning algorithms. In: Proceedings of the 12th Python in Science Conference, pp. 13–20 (2013)
9. Bishop, C.M.: Pattern Recognition and Machine Learning. Springer, Berlin (2006)
10. Breiman, L., et al.: Heuristics of instability and stabilization in model selection. Ann. Stat. 24(6), 2350–2383 (1996)
11. Brier, E., Clavier, C., Olivier, F.: Correlation power analysis with a leakage model. In: Joye, M., Quisquater, J.J. (eds.) Cryptographic Hardware and Embedded Systems–CHES 2004. Lecture Notes in Computer Science, vol. 3156, pp. 16–29. Springer, Berlin (2004)
12. Cagli, E., Dumas, C., Prouff, E.: Kernel discriminant analysis for information extraction in the presence of masking. In: Lemke-Rust, K., Tunstall, M. (eds.) Smart Card Research and Advanced Applications–CARDIS 2016. Lecture Notes in Computer Science, vol. 10146, pp. 1–22. Springer, Berlin (2016). https://doi.org/10.1007/978-3-319-54669-8_1
13. Cagli, E., Dumas, C., Prouff, E.: Convolutional neural networks with data augmentation against jitter-based countermeasures: profiling attacks without pre-processing. In: Fischer, W., Homma, N. (eds.) Cryptographic Hardware and Embedded Systems–CHES 2017. Lecture Notes in Computer Science, vol. 10529, pp. 45–68. Springer, Berlin (2017). https://doi.org/10.1007/978-3-319-66787-4_3
14. Chari, S., Rao, J., Rohatgi, P.: Template attacks. In: Kaliski Jr., B., Koç, Ç., Paar, C. (eds.) Cryptographic Hardware and Embedded Systems–CHES 2002. Lecture Notes in Computer Science, vol. 2523, pp. 13–29. Springer, Berlin (2002)
17. Doget, J., Prouff, E., Rivain, M., Standaert, F.X.: Univariate side channel attacks and leakage modeling. J. Cryptogr. Eng. 1(2), 123–144 (2011)
19. Fisher, R.A.: The use of multiple measurements in taxonomic problems. Ann. Eugen. 7(7), 179–188 (1936)
20. Friedman, J., Hastie, T., Tibshirani, R.: The Elements of Statistical Learning. Springer Series in Statistics, vol. 1. Springer, New York (2001)
21. Gilmore, R., Hanley, N., O'Neill, M.: Neural network based attack on a masked implementation of AES. In: IEEE International Symposium on Hardware Oriented Security and Trust, HOST 2015, pp. 106–111. IEEE Computer Society (2015). https://doi.org/10.1109/HST.2015.7140247
22. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256 (2010)
23. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. In: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 315–323 (2011)
24. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016)
25. Goodfellow, I.J., Bengio, Y., Courville, A.C.: Deep Learning. Adaptive Computation and Machine Learning. MIT Press, Cambridge (2016)
28. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
29. Heuser, A., Zohner, M.: Intelligent machine homicide: breaking cryptographic devices using support vector machines. In: Schindler, W., Huss, S.A. (eds.) Constructive Side-Channel Analysis and Secure Design–COSADE 2012. Lecture Notes in Computer Science, vol. 7275, pp. 249–264. Springer, Berlin (2012). https://doi.org/10.1007/978-3-642-29912-4_18
31. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. CoRR (2015). arXiv:1502.03167
32. Jarrett, K., Kavukcuoglu, K., LeCun, Y., et al.: What is the best multi-stage architecture for object recognition? In: 2009 IEEE 12th International Conference on Computer Vision, pp. 2146–2153. IEEE (2009)
33.
34. Kocher, P., Jaffe, J., Jun, B.: Differential power analysis. In: Wiener, M. (ed.) Advances in Cryptology–CRYPTO '99. Lecture Notes in Computer Science, vol. 1666, pp. 388–397. Springer, Berlin (1999)
36. LeCun, Y., Bengio, Y., et al.: Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks 3361(10) (1995)
37. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541–551 (1989)
38. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
40. LeCun, Y., Huang, F.J.: Loss functions for discriminative training of energy-based models. In: Cowell, R.G., Ghahramani, Z. (eds.) Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, AISTATS 2005. Society for Artificial Intelligence and Statistics (2005). http://www.gatsby.ucl.ac.uk/aistats/fullpapers/207.pdf
42.
43. Lerman, L., Poussier, R., Bontempi, G., Markowitch, O., Standaert, F.: Template attacks vs. machine learning revisited (and the curse of dimensionality in side-channel analysis). In: Mangard, S., Poschmann, A.Y. (eds.) Constructive Side-Channel Analysis and Secure Design–COSADE 2015. Lecture Notes in Computer Science, vol. 9064, pp. 20–33. Springer, Berlin (2015). https://doi.org/10.1007/978-3-319-21476-4_2
44. Maghrebi, H., Portigliatti, T., Prouff, E.: Breaking cryptographic implementations using deep learning techniques. In: Carlet, C., Hasan, M.A., Saraswat, V. (eds.) Security, Privacy, and Applied Cryptography Engineering–SPACE 2016. Lecture Notes in Computer Science, vol. 10076, pp. 3–26. Springer, Berlin (2016). https://doi.org/10.1007/978-3-319-49445-6_1
45. Mangard, S., Pramstaller, N., Oswald, E.: Successfully attacking masked AES hardware implementations. In: Rao, J., Sunar, B. (eds.) Cryptographic Hardware and Embedded Systems–CHES 2005. Lecture Notes in Computer Science, vol. 3659, pp. 157–171. Springer, Berlin (2005)
46. Martinasek, Z., Dzurenda, P., Malina, L.: Profiling power analysis attack based on MLP in DPA contest V4.2. In: 39th International Conference on Telecommunications and Signal Processing, TSP 2016, pp. 223–226. IEEE (2016). https://doi.org/10.1109/TSP.2016.7760865
47. Martinasek, Z., Hajny, J., Malina, L.: Optimization of power analysis using neural network. In: Francillon, A., Rohatgi, P. (eds.) Smart Card Research and Advanced Applications–CARDIS 2013. Lecture Notes in Computer Science, vol. 8419, pp. 94–107. Springer, Berlin. https://doi.org/10.1007/978-3-319-08302-5_7
48. Martinasek, Z., Malina, L., Trasy, K.: Profiling power analysis attack based on multi-layer perceptron network. Comput. Probl. Sci. Eng. 343, 317 (2015)
49. McAllester, D.A., Hazan, T., Keshet, J.: Direct loss minimization for structured prediction. In: Lafferty, J.D., Williams, C.K.I., Shawe-Taylor, J., Zemel, R.S., Culotta, A. (eds.) Advances in Neural Information Processing Systems 23, pp. 1594–1602. Curran Associates, Red Hook (2010). http://papers.nips.cc/paper/4069-direct-loss-minimization-for-structured-prediction
50. Messerges, T.: Using second-order power analysis to attack DPA resistant software. In: Koç, Ç., Paar, C. (eds.) Cryptographic Hardware and Embedded Systems–CHES 2000. Lecture Notes in Computer Science, vol. 1965, pp. 238–251. Springer, Berlin (2000)
51. Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: Fürnkranz, J., Joachims, T. (eds.) Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814. Omnipress, Madison (2010)
52. O'Flynn, C., Chen, Z.D.: ChipWhisperer: an open-source platform for hardware embedded security research. In: Prouff, E. (ed.) Constructive Side-Channel Analysis and Secure Design–COSADE 2014. Lecture Notes in Computer Science, vol. 8622, pp. 243–260. Springer, Berlin (2014). https://doi.org/10.1007/978-3-319-10175-0_17
53. Pearson, K.: On lines and planes of closest fit to systems of points in space. Philos. Mag. 2(11), 559–572 (1901)
55. Prouff, E., Rivain, M.: A generic method for secure SBox implementation. In: Kim, S., Yung, M., Lee, H.W. (eds.) WISA. Lecture Notes in Computer Science, vol. 4867, pp. 227–244. Springer, Berlin (2008)
56. Rokach, L., Maimon, O.: Data Mining with Decision Trees: Theory and Applications. World Scientific, River Edge (2008)
57. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
58. Schindler, W.: Advanced stochastic methods in side channel analysis on block ciphers in the presence of masking. J. Math. Cryptol. 2, 291–310 (2008)
59. Schindler, W., Lemke, K., Paar, C.: A stochastic model for differential side channel cryptanalysis. In: Rao, J., Sunar, B. (eds.) Cryptographic Hardware and Embedded Systems–CHES 2005. Lecture Notes in Computer Science, vol. 3659. Springer, Berlin (2005)
60. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition (2014). arXiv:1409.1556
61. Song, Y., Schwing, A.G., Zemel, R.S., Urtasun, R.: Direct loss minimization for training deep neural nets. CoRR (2015). arXiv:1511.06411
62. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
63. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826 (2016)
64. Weston, J., Watkins, C.: Multi-class support vector machines. Technical Report CSD-TR-98-04, Royal Holloway, University of London (1998)
65. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: European Conference on Computer Vision, pp. 818–833. Springer (2014)
Metadata
Title
Deep learning for side-channel analysis and introduction to ASCAD database
Authors
Ryad Benadjila
Emmanuel Prouff
Rémi Strullu
Eleonora Cagli
Cécile Dumas
Publication date
30-11-2019
Publisher
Springer Berlin Heidelberg
Published in
Journal of Cryptographic Engineering / Issue 2/2020
Print ISSN: 2190-8508
Electronic ISSN: 2190-8516
DOI
https://doi.org/10.1007/s13389-019-00220-8
