ALP-KD: Attention-Based Layer Projection for Knowledge Distillation

Authors

  • Peyman Passban, Amazon
  • Yimeng Wu, Huawei Noah’s Ark Lab
  • Mehdi Rezagholizadeh, Huawei Noah’s Ark Lab
  • Qun Liu, Huawei Noah’s Ark Lab

DOI:

https://doi.org/10.1609/aaai.v35i15.17610

Keywords:

Language Models

Abstract

Knowledge distillation is a training and compression strategy in which two neural networks, namely a teacher and a student, are coupled together during training. The teacher network is supposed to be a trustworthy predictor and the student tries to mimic its predictions. Usually, a student with a lighter architecture is selected so that we achieve compression while still delivering high-quality results. In such a setting, distillation only happens for final predictions, whereas the student could also benefit from the teacher’s supervision for internal components. Motivated by this, we studied the problem of distillation for intermediate layers. Since there might not be a one-to-one alignment between student and teacher layers, existing techniques skip some teacher layers and only distill from a subset of them. This shortcoming directly impacts quality, so we instead propose a combinatorial technique which relies on attention. Our model fuses teacher-side information and takes each layer’s significance into consideration, then performs distillation between combined teacher layers and those of the student. Using our technique, we distilled a 12-layer BERT (Devlin et al. 2019) into 6-, 4-, and 2-layer counterparts and evaluated them on GLUE tasks (Wang et al. 2018). Experimental results show that our combinatorial approach outperforms existing techniques.
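
To make the fusion step described in the abstract concrete, the sketch below shows one plausible reading: each student layer attends over all teacher layers, the teacher hidden states are combined with the resulting attention weights, and the student layer is trained to match the fused representation. The mean pooling, scaled dot-product scoring, and MSE objective are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal PyTorch sketch of attention-based layer projection for intermediate-layer
# distillation. Pooling choice, dot-product scoring, and the MSE objective are
# assumptions made for illustration only.
import torch
import torch.nn.functional as F

def alp_kd_loss(student_hiddens, teacher_hiddens):
    """student_hiddens: list of S tensors of shape [batch, seq, dim]
       teacher_hiddens: list of T tensors of shape [batch, seq, dim], with T >= S."""
    # Pool each layer to one vector per example (assumption: mean over tokens).
    s = torch.stack([h.mean(dim=1) for h in student_hiddens], dim=1)   # [batch, S, dim]
    t = torch.stack([h.mean(dim=1) for h in teacher_hiddens], dim=1)   # [batch, T, dim]

    # Attention weights: contribution of each teacher layer to each student layer.
    scores = torch.bmm(s, t.transpose(1, 2)) / (s.size(-1) ** 0.5)     # [batch, S, T]
    weights = F.softmax(scores, dim=-1)

    # Fuse teacher layers per student layer and match with MSE.
    fused = torch.bmm(weights, t)                                       # [batch, S, dim]
    return F.mse_loss(s, fused)
```

In practice this loss would be added to the usual prediction-level distillation and task losses; note that no teacher layer is skipped, since every layer receives a (possibly small) attention weight.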

Published

2021-05-18

How to Cite

Passban, P., Wu, Y., Rezagholizadeh, M., & Liu, Q. (2021). ALP-KD: Attention-Based Layer Projection for Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13657-13665. https://doi.org/10.1609/aaai.v35i15.17610

Section

AAAI Technical Track on Speech and Natural Language Processing II