1 Introduction
2 State of the Art
2.1 Machine Vision Quality Gates
2.2 Synthetic Training Data
3 Synthetic Training Data Generation for Machine Vision Quality Gates
3.1 Synthetic Data Generation Pipeline for Classification
4 Validation
4.1 Smart Automation Laboratory
4.2 Validation Setting
Datasets | Training (synthetic images in parentheses) | Test | Validation |
---|---|---|---|
Real dataset | 2000 | 100 | 100 |
Synthetic dataset | 2000 (2000) | 100 | 100 |
Hybrid dataset | 2000 (100) | 100 | 100 |
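The hybrid split in the table above (2000 training images, of which 100 are synthetic) can be assembled as in the following sketch. The function name and the sampling strategy are illustrative assumptions, not the paper's implementation; only the counts come from the table.

```python
import random

def build_hybrid_training_set(real_images, synthetic_images,
                              train_size=2000, synthetic_share=100, seed=42):
    """Mix real and synthetic samples into one training set.

    Per the dataset table: train_size = 2000 total images, of which
    synthetic_share = 100 are synthetic. Sampling without replacement,
    seeded for reproducibility (hypothetical helper).
    """
    rng = random.Random(seed)
    synth = rng.sample(synthetic_images, synthetic_share)
    real = rng.sample(real_images, train_size - synthetic_share)
    mixed = synth + real
    rng.shuffle(mixed)  # avoid ordering bias between the two sources
    return mixed
```

The fixed seed keeps the real/synthetic mixture identical across training runs, which matters when comparing models on the same hybrid split.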
4.3 Key Performance Indicators
Metric | Description | Formula |
---|---|---|
Precision | Number of true positives divided by the sum of true positives and false positives | \(\text{Precision}=\dfrac{\text{True Positives}}{\text{True Positives}+\text{False Positives}}\) |
Recall | Number of true positives divided by the sum of true positives and false negatives | \(\text{Recall}=\dfrac{\text{True Positives}}{\text{True Positives}+\text{False Negatives}}\) |
F1 Score | Harmonic mean of precision and recall | \(\text{F1}=2\cdot\dfrac{\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{Recall}}\) |
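The three metrics above, together with the macro averaging used in the results table, can be computed directly from predicted and true labels. The following is a minimal sketch; the function names and the example labels are illustrative, not taken from the paper's evaluation code.

```python
def per_class_metrics(y_true, y_pred, cls):
    """Precision, recall and F1 for one class, per the formulas above."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def macro_average(y_true, y_pred):
    """Unweighted mean of per-class precision, recall and F1
    (the 'macro avg.' reported in the results table)."""
    classes = sorted(set(y_true))
    metrics = [per_class_metrics(y_true, y_pred, c) for c in classes]
    n = len(classes)
    return tuple(sum(m[i] for m in metrics) / n for i in range(3))
```

Macro averaging weights every class equally regardless of its frequency, so a classifier cannot score well simply by favoring the majority defect class.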
4.4 Results & Discussion
Datasets | Precision macro avg. | Recall macro avg. | F1 Score macro avg. |
---|---|---|---|
Synthetic dataset | | | |
– DenseNet 201 | 0.87 | 0.86 | 0.86 |
– ResNet 152 v2 | 0.86 | 0.85 | 0.85 |
– Xception | 0.89 | 0.89 | 0.89 |
Real dataset | | | |
– DenseNet 201 | 1.00 | 1.00 | 1.00 |
– ResNet 152 v2 | 1.00 | 1.00 | 1.00 |
– Xception | 1.00 | 1.00 | 1.00 |
Hybrid dataset | | | |
– DenseNet 201 | 1.00 | 1.00 | 1.00 |
– ResNet 152 v2 | 0.99 | 0.99 | 0.99 |
– Xception | 0.99 | 0.99 | 0.99 |