Introduction
Transfer learning background
TL in manufacturing
S/no. | Source | Real dataset | Time series | Problem type | Problem area |
---|---|---|---|---|---|
1. | Sun et al. (2019) | Y | Y | Regression | RUL |
2. | Ferguson et al. (2018) | N | N | Classification | Defect detection |
3. | Huang et al. (2019) | Y | Y | Regression | Production monitoring |
4. | Jiao et al. (2020) | N | N | Classification | Quality control |
5. | Mao et al. (2019) | N | N | Regression | RUL |
6. | Wen et al. (2019) | N | N | Classification | Fault diagnosis |
7. | Cao et al. (2020) | N | Y | Classification | Fault diagnosis |
8. | Zhang et al. (2019) | Y | Y | Classification | Fault diagnosis |
9. | Wen et al. (2017) | N | Y | Classification | Fault diagnosis |
10. | Wen et al. (2019) | N | Y | Classification | Fault diagnosis |
11. | Xiao et al. (2019) | Y | N | Classification | Fault diagnosis |
12. | Xu et al. (2019) | Y | N | Classification | Fault diagnosis |
13. | Yang et al. (2019) | N | N | Classification | Fault diagnosis |
14. | Zellinger et al. (2020) | Y | Y | Regression | Cyclical manufacturing |
15. | Tercan et al. (2018) | N | N | Regression | Injection moulding |
16. | Wang et al. (2021) | Y | Y | Regression | Machining |
17. | Zhao et al. (2020) | Y | Y | Regression | Fault diagnosis |
18. | Li et al. (2020) | N | Y | Regression | Fault diagnosis |
Problem statement and methodology
Class | \(B_1\) | \(B_2\) | \(B_3\) | \(B_4\) | \(B_5\) | \(B_6\) | \(B_7\) | \(B_8\) | \(B_9\) |
---|---|---|---|---|---|---|---|---|---|
Low | 112,957 | 275,340 | 279,979 | 305,931 | 270,954 | 319,020 | 312,035 | 303,802 | 271,000 |
Medium | 68,150 | 206,924 | 202,268 | 193,578 | 216,474 | 186,618 | 193,846 | 184,767 | 219,203 |
High | 61,536 | 43,337 | 43,353 | 26,092 | 38,172 | 19,962 | 19,720 | 37,031 | 35,397 |
Total | 242,364 | 525,601 | 525,600 | 525,601 | 525,600 | 525,600 | 525,601 | 525,600 | 525,600 |
Description of TL terminologies
Problem formulation
Dataset
Baseline network architecture
Transfer learning experiments
Weight reuse
Fine-tuning
Results and discussion
Baseline models
Strategy | Number of re-trainable layers | Layers retrained |
---|---|---|
Fine Tune-1 | 1 | Classification layer |
Fine Tune-2 | 2 | Dense and Classification layers |
Fine Tune-3 | 3 | 1xLSTM, Dense and Classification layers |
Fine Tune-6 | 6 | 2xLSTM, Dense and Classification layers |
Reuse | All | All layers |
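The strategies above can be sketched in code. The following is a minimal PyTorch illustration, not the paper's implementation: layer names, sizes, and the block ordering are assumptions based on the table (two stacked LSTM layers followed by dense and classification layers). Each fine-tune strategy freezes the transferred weights and unfreezes only the last few blocks, counting back from the classification head; the Reuse strategy corresponds to leaving every layer trainable after initialising from the source model.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the baseline network described in the paper:
# two stacked LSTM layers, a dense layer, and a classification head.
class BaselineNet(nn.Module):
    def __init__(self, n_features=1, hidden=64, n_classes=3):
        super().__init__()
        self.lstm1 = nn.LSTM(n_features, hidden, batch_first=True)
        self.lstm2 = nn.LSTM(hidden, hidden, batch_first=True)
        self.dense = nn.Linear(hidden, hidden)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):
        x, _ = self.lstm1(x)
        x, _ = self.lstm2(x)
        x = torch.relu(self.dense(x[:, -1, :]))  # last time step
        return self.classifier(x)

def apply_strategy(model, n_retrainable_blocks):
    """Freeze all transferred weights, then unfreeze the last
    `n_retrainable_blocks` blocks, counting back from the head:
    classifier -> dense -> lstm2 -> lstm1.
    1 block  ~ Fine Tune-1, 2 ~ Fine Tune-2, 3 ~ Fine Tune-3,
    4 blocks ~ Fine Tune-6 (both LSTMs, dense and classification layers);
    Reuse keeps every block trainable (n_retrainable_blocks = 4)."""
    blocks = [model.classifier, model.dense, model.lstm2, model.lstm1]
    for p in model.parameters():
        p.requires_grad_(False)
    for block in blocks[:n_retrainable_blocks]:
        for p in block.parameters():
            p.requires_grad_(True)
    return model
```

After `apply_strategy`, only the unfrozen parameters receive gradient updates when the model is retrained on the (possibly reduced) target dataset, which is what makes the fine-tune strategies cheaper than training the baseline from scratch.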
Target | Data fraction | F-score | Time (s) | TL type |
---|---|---|---|---|
\(B_1\) | 0.1 | 0.92938 | 26.1797 | fine-tune1 |
\(B_1\) | 0.3 | 0.9314 | 98.1249 | reuse |
\(B_1\) | 0.5 | 0.92931 | 134.6926 | reuse |
\(B_1\) | 1 | 0.92564 | 66.3608 | fine-tune2 |
\(B_2\) | 0.1 | 0.91126 | 56.0303 | fine-tune6 |
\(B_2\) | 0.3 | 0.91138 | 38.6265 | fine-tune3 |
\(B_2\) | 0.5 | 0.91111 | 51.1336 | fine-tune3 |
\(B_2\) | 1 | 0.91276 | 125.1308 | fine-tune6 |
\(B_3\) | 0.1 | 0.92165 | 54.127 | fine-tune6 |
\(B_3\) | 0.3 | 0.92319 | 37.6584 | fine-tune3 |
\(B_3\) | 0.5 | 0.92422 | 82.6167 | fine-tune6 |
\(B_3\) | 1 | 0.9235 | 1870.6878 | base |
\(B_4\) | 0.1 | 0.90383 | 53.9838 | fine-tune6 |
\(B_4\) | 0.3 | 0.90363 | 37.3664 | fine-tune3 |
\(B_4\) | 0.5 | 0.90444 | 80.2648 | fine-tune6 |
\(B_4\) | 1 | 0.9052 | 1834.4249 | base |
\(B_5\) | 0.1 | 0.80879 | 53.4503 | fine-tune6 |
\(B_5\) | 0.3 | 0.94181 | 65.7033 | fine-tune6 |
\(B_5\) | 0.5 | 0.94226 | 79.6871 | fine-tune6 |
\(B_5\) | 1 | 0.9394 | 1027.9393 | base |
\(B_6\) | 0.1 | 0.91702 | 56.6719 | fine-tune6 |
\(B_6\) | 0.3 | 0.91971 | 35.6487 | fine-tune3 |
\(B_6\) | 0.5 | 0.91963 | 78.979 | fine-tune6 |
\(B_6\) | 1 | 0.91945 | 122.0456 | fine-tune6 |
\(B_7\) | 0.1 | 0.91553 | 55.1323 | fine-tune6 |
\(B_7\) | 0.3 | 0.91593 | 66.0485 | fine-tune6 |
\(B_7\) | 0.5 | 0.91552 | 253.2316 | reuse |
\(B_7\) | 1 | 0.9168 | 1670.3705 | base |
\(B_8\) | 0.1 | 0.88976 | 55.081 | fine-tune6 |
\(B_8\) | 0.3 | 0.89157 | 34.3694 | fine-tune3 |
\(B_8\) | 0.5 | 0.89061 | 48.8034 | fine-tune3 |
\(B_8\) | 1 | 0.89079 | 119.8778 | fine-tune6 |