1 Introduction
2 Wear monitoring in blanking processes
2.1 Blanking processes
2.2 Wear phenomena during blanking
2.3 Data-driven methods for process monitoring
2.3.1 Conventional monitoring approaches
2.3.2 Machine learning approaches
2.3.3 Deep learning approaches
2.4 Convolutional neural networks
3 Methodology
3.1 Experimental setup
Process parameters | Value
---|---
Stroke speed | 300 spm
Stroke distance | 35 mm
Clearance | 7.5 %

Material parameters | Value
---|---
Description | HC 260 LA (1.0480)
Tensile strength | 365 \(\mathrm{N\,mm^{-2}}\)
Elongation \(\mathrm{A_{80}}\) | 27.10 %
Sheet thickness | \((2.00 \pm 0.10)\) mm
3.2 Data generation
3.3 Deep learning modeling
3.3.1 Choice of the transfer learning model
Hyperparameter | MobileNet | Self-created CNN
---|---|---
Number of FCLs | Not used (0) | 3
Neurons in FCLs | Not used | 512; 512; 512
Batch normalization | Not used | After every FCL
Dropout | Not used | Not used
L2 regularization | Not used | 0.01
Number of CLs | Not optimized | 7
Learning rate | 0.0001 | 0.001
Trained layers | Last 24 layers | All layers
Filters in CLs | Not optimized | \(2^5\); \(2^6\); \(2^6\); \(2^7\); \(2^8\); \(2^8\); \(2^8\)
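The optimized configuration of the self-created CNN can be read directly off the table above. The following is a minimal Keras sketch, assuming 3×3 kernels with 'same' padding, one max-pooling layer after each of the first six convolutional layers, and L2 regularization applied to the dense layers; kernel size, padding, and the regularizer placement are not fixed by the table and are assumptions here, so parameter counts will only match the values reported below if they coincide with the authors' choices.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_custom_cnn(num_classes=16):
    """Sketch of the self-created CNN: 7 convolutional layers (CLs) with the
    filter counts from the table, 6 max-pooling layers (PLs), and 3 fully
    connected layers (FCLs) of 512 neurons, each followed by batch norm."""
    filters = [2**5, 2**6, 2**6, 2**7, 2**8, 2**8, 2**8]
    model = models.Sequential([layers.Input(shape=(128, 128, 3))])
    for i, f in enumerate(filters):
        # Assumed 3x3 kernels with 'same' padding; not specified in the table.
        model.add(layers.Conv2D(f, 3, padding="same", activation="relu"))
        if i < 6:  # 6 pooling layers for 7 CLs, per the table
            model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    for _ in range(3):
        model.add(layers.Dense(512, activation="relu",
                               kernel_regularizer=regularizers.l2(0.01)))
        model.add(layers.BatchNormalization())
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",  # SCCE
                  metrics=["accuracy"])
    return model
```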
3.3.2 Hyperparameter optimization
Network parameter | MobileNet | Self-created CNN
---|---|---
Activation function | ReLU | ReLU
Number of layers | 88 | 31
Trainable parameters | 3,223,376 | 2,273,680
Output activation function | Softmax | Softmax
Pooling operation used | Average | Max
Number of CLs | 27 | 7
Number of PLs | 1 | 6
Input dimension of \(\mathbf{X}_\mathrm{in}\) | \(\mathbb{R}^{224 \times 224 \times 3}\) | \(\mathbb{R}^{128 \times 128 \times 3}\)
Number of output neurons | 16 | 16
Number of FCLs | 0 | 3
Optimization algorithm | Adam | Adam
Loss function | SCCE (sparse categorical cross-entropy) | SCCE
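For the MobileNet column, a minimal transfer-learning sketch consistent with the table (no hidden FCLs, a single average-pooling operation, a 16-neuron softmax output, and only the last 24 layers trainable) could look as follows. The use of ImageNet-pretrained weights and the exact head are assumptions, since the table only fixes aggregate properties of the network:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mobilenet_classifier(num_classes=16):
    """Transfer-learning sketch: pretrained MobileNet backbone, global
    average pooling, and a 16-neuron softmax output layer."""
    base = tf.keras.applications.MobileNet(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    for layer in base.layers[:-24]:  # freeze all but the last 24 layers
        layer.trainable = False
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),  # the single "Average" PL
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```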
4 Results
Statistical parameter | MobileNet | Self-created CNN
---|---|---
Mean accuracy \(\mu\) | 98.94 % | 98.80 %
Median accuracy | 98.93 % | 98.83 %
Standard deviation \(\sigma\) | 0.24 % | 0.37 %
95 % confidence interval | \((98.94 \pm 0.06) \%\) | \((98.80 \pm 0.10) \%\)
Maximum accuracy | 99.41 % | 99.71 %
Minimum accuracy | 98.44 % | 97.47 %
t-test parameter | Value
---|---
Observations \(N\) per model | 100
Degrees of freedom | 169
\(t_\mathrm{stat}\) | 3.13
\(t_\mathrm{crit}\) (one-tailed) | 2.35
\(p\)-value (one-tailed) | 0.001
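The significance test can be reproduced from the two samples of 100 test accuracies. Below is a SciPy sketch with synthetic stand-in data drawn to match the reported means and standard deviations (the paper's raw per-run accuracies are not reproduced here). That the reported 169 degrees of freedom fall below the pooled-variance value of 198 suggests a Welch-type (unequal-variance) test; this is an inference, not a stated fact:

```python
import numpy as np
from scipy import stats

# Synthetic placeholders for the 100 per-run test accuracies of each model.
rng = np.random.default_rng(seed=0)
acc_mobilenet = rng.normal(loc=98.94, scale=0.24, size=100)
acc_custom = rng.normal(loc=98.80, scale=0.37, size=100)

# Unequal-variance (Welch) two-sample t-test.
t_stat, p_two_sided = stats.ttest_ind(acc_mobilenet, acc_custom,
                                      equal_var=False)
p_one_tailed = p_two_sided / 2  # one-tailed p-value, as reported above
print(f"t_stat = {t_stat:.2f}, one-tailed p = {p_one_tailed:.4f}")
```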
\(r_i\) [\(\mathrm{mm}\)] | 0.00 | 0.10 | 0.15 | 0.20 | 0.25 | 0.30 | 0.35 | 0.40 | 0.45 | 0.50 | 0.55 | 0.60 | 0.65 | 0.70 | 0.75 | 0.80
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
0.00 | 100.00 | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
0.10 | – | 98.47 | 0.39 | – | 1.11 | – | 0.03 | – | – | – | – | – | – | – | – | –
0.15 | – | – | 100.00 | – | – | – | – | – | – | – | – | – | – | – | – | –
0.20 | – | – | 0.02 | 99.98 | – | – | – | – | – | – | – | – | – | – | – | –
0.25 | – | – | 0.05 | – | 99.95 | – | – | – | – | – | – | – | – | – | –
0.30 | – | – | – | – | – | 100.00 | – | – | – | – | – | – | – | – | – | –
0.35 | – | – | – | – | – | – | 100.00 | – | – | – | – | – | – | – | – | –
0.40 | – | – | – | – | – | – | – | 99.97 | – | 0.03 | – | – | – | – | – | –
0.45 | – | – | – | – | – | – | – | – | 99.73 | – | 0.27 | – | – | – | – | –
0.50 | – | – | – | – | – | – | – | – | – | 100.00 | – | – | – | – | – | –
0.55 | – | – | – | – | – | – | – | – | 0.02 | – | 98.00 | 1.98 | – | – | – | –
0.60 | – | – | – | – | – | – | – | – | – | – | 0.52 | 96.86 | 0.22 | 2.41 | – | –
0.65 | – | – | – | – | – | – | – | – | – | – | 0.86 | 0.81 | 98.23 | 0.08 | 0.02 | –
0.70 | – | – | – | – | – | – | – | – | – | – | – | 1.03 | 2.05 | 94.67 | 1.89 | 0.36
0.75 | – | – | – | – | – | – | – | – | – | – | – | – | – | 0.03 | 99.97 | –
0.80 | – | – | – | – | – | – | – | – | – | – | – | – | – | – | 2.78 | 97.22
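Each row of the confusion matrix above corresponds to one true wear-state class (radius \(r_i\)) and sums to 100 %, i.e. the matrix is row-normalized. The following sketch shows how such a matrix can be computed with scikit-learn; `y_true` and `y_pred` are hypothetical placeholders standing in for the test labels and the model's predicted classes:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical placeholders: integer indices into the 16 radius classes.
# In practice: y_pred = model.predict(x_test).argmax(axis=1)
rng = np.random.default_rng(seed=0)
y_true = rng.integers(0, 16, size=1000)
y_pred = y_true.copy()

# Row-normalized confusion matrix in percent: each row is one true class
# r_i and sums to 100, matching the layout of the table above.
cm = confusion_matrix(y_true, y_pred, labels=list(range(16)),
                      normalize="true") * 100
print(np.round(cm, 2))
```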