1 Introduction
2 Background
3 Related Work
3.1 Aerial Imagery Pile Burn Detection Using Deep Learning: The FLAME Dataset
Dataset | Precision (%) | Recall (%) | AUC (%) | F1-Score (%) | Sensitivity (%) | Specificity (%) | IoU (%)
---|---|---|---|---|---|---|---
Image Segmentation | 91.99 | 83.88 | 99.85 | 87.75 | 83.12 | 99.96 | 78.17
Performance

Dataset | Loss | Accuracy (%)
---|---|---
Test set | 0.7414 | 76.23
Validation set | 0.1506 | 94.31
Training set | 0.0857 | 96.79
3.2 Active Fire Detection Landsat-8 Imagery: A Large-Scale Dataset and a Deep-Learning Study
Mask | CNN Architecture | P | R | IoU | F
---|---|---|---|---|---
Schroeder et al. | U-Net(10c) | 86.8 | 89.7 | 78.9 | 88.2
Schroeder et al. | U-Net(3c) | 89.8 | 88.8 | 80.7 | 89.3
Schroeder et al. | U-Net-Light(3c) | 90.8 | 86.1 | 79.2 | 88.4
Murphy et al. | U-Net(10c) | 93.6 | 92.5 | 87.0 | 93.0
Murphy et al. | U-Net(3c) | 89.1 | 97.6 | 87.2 | 93.2
Murphy et al. | U-Net-Light(3c) | 92.6 | 95.1 | 88.4 | 93.8
Kumar-Roy | U-Net(10c) | 84.6 | 94.1 | 80.3 | 89.1
Kumar-Roy | U-Net(3c) | 84.2 | 90.6 | 77.5 | 87.3
Kumar-Roy | U-Net-Light(3c) | 76.8 | 93.2 | 72.7 | 84.2
Intersection | U-Net(10c) | 84.4 | 99.7 | 84.2 | 91.4
Intersection | U-Net(3c) | 93.4 | 92.4 | 86.7 | 92.9
Intersection | U-Net-Light(3c) | 87.4 | 97.3 | 85.4 | 92.1
Voting | U-Net(10c) | 92.9 | 95.5 | 89.0 | 94.2
Voting | U-Net(3c) | 91.9 | 95.3 | 87.9 | 93.6
Voting | U-Net-Light(3c) | 90.2 | 96.5 | 87.3 | 93.2
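The P, R, IoU, and F columns above are not independent: the F-score is the harmonic mean of precision and recall, and for binary segmentation the IoU (Jaccard index) follows from the F-score (Dice coefficient) as IoU = F / (2 − F). A minimal sketch of these relationships, checked against the Murphy et al. / U-Net(10c) row (function names are illustrative, not from the cited paper):

```python
def f_score(p: float, r: float) -> float:
    """F-score (Dice): harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

def iou_from_f(f: float) -> float:
    """Jaccard index (IoU) derived from the Dice coefficient (f in [0, 1])."""
    return f / (2 - f)

# Murphy et al. / U-Net(10c) row: P = 93.6, R = 92.5
f = f_score(93.6, 92.5)                 # ~93.0, matching the F column
iou = iou_from_f(f / 100) * 100         # ~87.0, matching the IoU column
```

Re-deriving a row this way is a quick sanity check that the extracted numbers are internally consistent.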
3.3 Wildfire Segmentation Using Deep Vision Transformers
Model | Backbone | Input Resolution | F1-Score (%)
---|---|---|---
TransUNet | Res50-ViT | 224×224 | 97.5
TransUNet | Res50-ViT | 512×512 | 97.7
TransUNet | ViT | 224×224 | 94.1
TransUNet | ViT | 512×512 | 94.8
MedT | simple CNN-Transformer | 224×224 | 95.5
MedT | simple CNN-Transformer | 256×256 | 96.0
3.4 Attention Based CNN Model for Fire Detection and Localization in Real-World Images
4 Goals
4.1 Research Questions
5 Method
5.1 Xception Architecture
5.2 Setup
5.3 Limitations and Validity Threats
5.4 Expected Results
5.5 Datasets
5.5.1 FLAME Dataset
5.5.2 Kaggle "NASA" Dataset
5.5.3 "GitHub" Dataset
# | Dataset | Size/Class | Characteristic
---|---|---|---
1 | FLAME | 15,000 | aerial, scenery
2 | NASA | 2,000 | ground level, closer
3 | GitHub | 1,200 | ground level, versatile
6 Results
6.1 Evaluation
- True positive: images predicted to contain fire that do contain fire, i.e., a fire image that is correctly classified.
- True negative: images predicted to contain no fire that indeed contain no fire, i.e., an image without fire that is correctly classified.
- False positive: images the model predicted to contain fire when they contained none, i.e., an incorrect fire prediction.
- False negative: images the model predicted to contain no fire when fire was in fact present, i.e., an incorrect no-fire prediction.
6.2 Model Performances Comparison
6.2.1 Training Model on FLAME Dataset
6.2.2 Training Model on NASA Dataset
6.2.3 Performance: FLAME Model with GitHub Dataset
6.2.4 Performance: FLAME Model with NASA Dataset
6.2.5 Performance: NASA Model with FLAME Dataset
6.2.6 Performance: NASA Model with GitHub Dataset
# | Model (trained on) | Evaluated on | Accuracy | Loss
---|---|---|---|---
1 | FLAME | FLAME | 99% | 0.02
2 | FLAME | NASA | 62% | 33
3 | FLAME | GitHub | 60% | 53.5
4 | NASA | NASA | 92% | 0.18
5 | NASA | FLAME | 29% | 0.90
6 | NASA | GitHub | 67% | 0.67
# | Model (trained on) | Evaluated on | Precision | Recall
---|---|---|---|---
1 | FLAME | NASA | 98% | 24%
2 | FLAME | GitHub | 98% | 20%
3 | NASA | FLAME | 20% | 32%
4 | NASA | GitHub | 82% | 44%