Fully convolutional networks for chip-wise defect detection employing photoluminescence images

Efficient quality control in LED manufacturing

Authors: Maike Lorena Stern, Martin Schellenberger

Published in: Journal of Intelligent Manufacturing | Issue 1/2021

Abstract

Efficient quality control is indispensable in the manufacturing of light-emitting diodes (LEDs). Because defective LED chips may be traced back to different causes, a time- and cost-intensive electrical and optical contact measurement is employed. Fast photoluminescence measurements, on the other hand, are commonly used to detect wafer-separation damage, but they also hold the potential to enable efficient detection of all kinds of defective LED chips. On a photoluminescence image, every pixel corresponds to an LED chip’s brightness after photoexcitation and thus reveals performance information. However, due to unevenly distributed brightness values and varying defect patterns, photoluminescence images are not yet employed for comprehensive defect detection. In this work, we show that fully convolutional networks, trained on a small data set of photoluminescence images, can be used for chip-wise defect detection. Pixel-wise labels allow us to classify every single chip as defective or not. Because the labels are measurement-based, they are easy to procure, and our experiments show that discrepancies between training images and labels do not hinder network training. Using weighted loss calculation, we were able to equalize our highly unbalanced class categories. Owing to the consistent use of skip connections and residual shortcuts, our network is able to predict a variety of structures, from extensive defect clusters down to single defective LED chips.
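To make the ingredients named in the abstract concrete, the following sketch outlines how a small fully convolutional network with skip connections, residual shortcuts, per-pixel (chip-wise) classification and a class-weighted loss could be set up. It is a minimal illustration in TensorFlow/Keras, not the authors' architecture: the input resolution, filter counts and class weights are placeholder assumptions, and the two classes simply stand for "functional" and "defective" chips.

import tensorflow as tf
from tensorflow.keras import layers

# Placeholder sizes: the real wafer-map resolution and channel counts are not
# stated in the abstract, so these values are illustrative assumptions.
NUM_CLASSES = 2              # "functional" vs. "defective" LED chip
INPUT_SHAPE = (256, 256, 1)  # one photoluminescence brightness value per chip

def residual_block(x, filters):
    # Two 3x3 convolutions plus a residual shortcut (1x1 conv matches channels).
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.ReLU()(layers.Add()([shortcut, y]))

def build_fcn():
    inputs = layers.Input(INPUT_SHAPE)
    # Encoder
    e1 = residual_block(inputs, 32)
    p1 = layers.MaxPooling2D()(e1)
    e2 = residual_block(p1, 64)
    p2 = layers.MaxPooling2D()(e2)
    # Bottleneck
    b = residual_block(p2, 128)
    # Decoder with skip connections back to the encoder
    u2 = layers.UpSampling2D()(b)
    d2 = residual_block(layers.Concatenate()([u2, e2]), 64)
    u1 = layers.UpSampling2D()(d2)
    d1 = residual_block(layers.Concatenate()([u1, e1]), 32)
    # Per-pixel, i.e. per-chip, class probabilities
    outputs = layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(d1)
    return tf.keras.Model(inputs, outputs)

# Weighted loss: defective chips are rare, so their class weight is raised.
CLASS_WEIGHTS = tf.constant([1.0, 20.0])  # illustrative weights, not from the paper

def weighted_cross_entropy(y_true, y_pred):
    # y_true: one-hot labels per pixel; y_pred: softmax probabilities per pixel
    pixel_weights = tf.reduce_sum(CLASS_WEIGHTS * y_true, axis=-1)
    ce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
    return tf.reduce_mean(pixel_weights * ce)

model = build_fcn()
model.compile(optimizer="adam", loss=weighted_cross_entropy)

Raising the weight of the rare "defective" class in the loss is one common way to counteract the strong class imbalance the abstract refers to; the actual weighting scheme used in the paper may differ.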


Metadata
Title
Fully convolutional networks for chip-wise defect detection employing photoluminescence images
Efficient quality control in LED manufacturing
Authors
Maike Lorena Stern
Martin Schellenberger
Publication date
31-03-2020
Publisher
Springer US
Published in
Journal of Intelligent Manufacturing / Issue 1/2021
Print ISSN: 0956-5515
Electronic ISSN: 1572-8145
DOI
https://doi.org/10.1007/s10845-020-01563-4
