Introduction

- Our algorithm uses an ensemble learning method that combines multiple deep learning models to improve the performance of federated learning on non-IID datasets.
- Our algorithm trains multiple independent deep learning models in parallel, which reduces the total training time.
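The two points above can be sketched together: base models are trained concurrently, and their class-probability outputs are combined by soft voting (averaging). `train_model` and `predict_proba` are hypothetical stand-ins for illustration, not the paper's implementation.

```python
# Sketch: parallel training of independent base models + soft-voting ensemble.
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def train_model(seed: int):
    """Hypothetical stand-in for training one deep model; returns its 'handle'."""
    return seed


def predict_proba(model: int, n_samples: int = 4, n_classes: int = 3) -> np.ndarray:
    """Hypothetical stand-in for a trained model's softmax outputs."""
    rng = np.random.default_rng(model)
    logits = rng.normal(size=(n_samples, n_classes))
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)


# Train the base models concurrently instead of one after another.
with ThreadPoolExecutor() as pool:
    models = list(pool.map(train_model, [0, 1, 2]))

# Soft-voting ensemble: average the per-model probabilities, then argmax.
probs = np.mean([predict_proba(m) for m in models], axis=0)
ensemble_pred = probs.argmax(axis=1)
```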
Background
Federated learning
Non-IID
Ensemble learning
Algorithm
Federated learning
Basic federated learning
Basic model training
Basic model prediction
Meta-federated learning
Prediction
Experiment
Preparation
Algorithm | Datasets | Parameters | Evaluation indicators |
---|---|---|---|
FedAvg [14] | MNIST, CIFAR-10, Shakespeare | Adjustable: epoch, batch size, learning rate, client fraction | Accuracy, loss |
FedProx [15] | MNIST, FEMNIST, Shakespeare, Sent140 | Adjustable: \(\mu\), stragglers; Fixed: epoch, batch size, learning rate | Loss |
FedPD [20] | FEMNIST | Adjustable: R, p; Fixed: epoch, batch size, learning rate | Gradient |
FedBN [21] | SVHN, USPS, SynthDigits, MNIST-M, MNIST | Adjustable: epoch; Fixed: batch size, learning rate | Accuracy, loss |
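As a reference point for the comparison, the aggregation step shared by the baselines above, FedAvg [14], averages client parameters weighted by each client's sample count. A minimal NumPy sketch (not tied to any specific framework):

```python
# FedAvg server-side aggregation: sample-size-weighted average of client weights.
import numpy as np


def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)  # shape: (n_clients, n_params)
    return (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()


# Two clients with 10 and 30 samples: the larger client dominates.
w = fedavg([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [10, 30])
# w -> [0.25, 0.75]
```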
Datasets
Division methods
Model | Fold-1\(^{{\text {a}}}\) (%) | Fold-2 (%) | Fold-3 (%) | Fold-4 (%) | Fold-5 (%) | Non-Fold\(^{{\text {b}}}\) (%) |
---|---|---|---|---|---|---|
ResNet | 76.86 | 71.20 | 70.48 | 73.00 | 69.96 | 75.13 |
VGG | 81.48 | 76.06 | 80.05 | 75.89 | 73.00 | 77.73 |
DenseNet | 70.00 | 61.73 | 67.91 | 65.39 | 69.61 | 76.67 |
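The Fold-1 through Fold-5 columns come from a 5-fold division of the data. The exact assignment used in the table is not reproduced here; a generic 5-fold split over sample indices, with each fold held out once, can be sketched as:

```python
# Illustrative 5-fold division: partition indices into five disjoint folds,
# each serving once as the held-out evaluation set.
import numpy as np

n_samples = 100
indices = np.arange(n_samples)
folds = np.array_split(indices, 5)  # five disjoint index sets

for k, test_idx in enumerate(folds):
    # Train on the other four folds, evaluate on fold k (Fold-(k+1) in the table).
    train_idx = np.concatenate([f for i, f in enumerate(folds) if i != k])
```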
Models
Parameters and evaluation indicators
Results
Detailed explanation
Model | Train acc (%) | Test acc (%) | Test F1-score (%) |
---|---|---|---|
Ours | 78.06 | 86.12 | 85.96 |
ResNet | 72.69 | 75.13 | 74.17 |
VGG | 75.01 | 77.73 | 74.79 |
DenseNet | 73.13 | 76.67 | 75.73 |
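The F1-score column summarizes per-class precision and recall. Assuming macro averaging across classes (the averaging scheme is not stated in the table), it can be computed as:

```python
# Macro-averaged F1: per-class F1 from precision/recall, averaged uniformly.
def macro_f1(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```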
Experiments with different training parameters
Batch\(^{{\text {a}}}\) | Epoch\(^{{\text {b}}}\) | Method | Model | Accuracy (%) |
---|---|---|---|---|
128 | 1 | FedAvg | ResNet | 85.56 |
 | | | VGG | 83.41 |
 | | FedProx | ResNet | 85.92 |
 | | | VGG | 89.51 |
 | | Ours | ResNet & VGG | 97.46 |
 | 3 | FedAvg | ResNet | 81.96 |
 | | | VGG | 68.99 |
 | | FedProx | ResNet | 83.62 |
 | | | VGG | 88.42 |
 | | Ours | ResNet & VGG | 96.88 |
256 | 1 | FedAvg | ResNet | 83.92 |
 | | | VGG | 83.06 |
 | | FedProx | ResNet | 86.18 |
 | | | VGG | 95.06 |
 | | Ours | ResNet & VGG | 97.51 |
 | 3 | FedAvg | ResNet | 83.35 |
 | | | VGG | 62.18 |
 | | FedProx | ResNet | 89.17 |
 | | | VGG | 89.05 |
 | | Ours | ResNet & VGG | 95.85 |
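The FedProx baseline in this comparison differs from FedAvg through the adjustable \(\mu\) listed earlier: each client minimizes its task loss plus a proximal penalty \(\frac{\mu}{2}\lVert w - w_{\text{global}}\rVert^2\) that keeps local weights near the current global model. A minimal sketch of the local objective:

```python
# FedProx local objective: task loss plus proximal penalty toward global weights.
import numpy as np


def fedprox_local_loss(task_loss, w, w_global, mu):
    """Local FedProx objective; mu = 0 recovers plain FedAvg local training."""
    return task_loss + 0.5 * mu * np.sum((w - w_global) ** 2)


# Larger mu pulls each client more strongly toward the global model,
# which mitigates client drift on non-IID data.
loss = fedprox_local_loss(1.0, np.array([1.0, 1.0]), np.zeros(2), 0.1)
```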
Batch\(^{{\text {a}}}\) | Epoch\(^{{\text {b}}}\) | Method | Model | Accuracy (0.5)\(^{{\text {c}}}\) (%) | Accuracy (1.0) (%) | Accuracy (5.0) (%) |
---|---|---|---|---|---|---|
128 | 1 | FedAvg | ResNet | 74.75 | 63.91 | 66.28 |
 | | | VGG | 81.38 | 67.96 | 73.09 |
 | | FedProx | ResNet | 75.45 | 73.75 | 74.29 |
 | | | VGG | 84.46 | 74.32 | 77.62 |
 | | Ours | ResNet & VGG | 85.36 | 74.74 | 79.17 |
 | 3 | FedAvg | ResNet | 72.96 | 61.81 | 65.51 |
 | | | VGG | 80.91 | 67.13 | 75.18 |
 | | FedProx | ResNet | 76.24 | 73.41 | 75.36 |
 | | | VGG | 83.13 | 73.93 | 79.94 |
 | | Ours | ResNet & VGG | 86.25 | 75.39 | 80.59 |
256 | 1 | FedAvg | ResNet | 72.97 | 60.11 | 65.67 |
 | | | VGG | 81.44 | 64.44 | 73.51 |
 | | FedProx | ResNet | 73.91 | 74.36 | 74.23 |
 | | | VGG | 84.37 | 72.11 | 75.17 |
 | | Ours | ResNet & VGG | 84.59 | 71.74 | 78.71 |
 | 3 | FedAvg | ResNet | 72.23 | 59.12 | 65.78 |
 | | | VGG | 81.55 | 67.55 | 75.93 |
 | | FedProx | ResNet | 75.67 | 71.86 | 76.35 |
 | | | VGG | 76.17 | 73.25 | 75.01 |
 | | Ours | ResNet & VGG | 86.55 | 75.87 | 78.43 |
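The footnote for the (0.5)/(1.0)/(5.0) column labels is not reproduced here. If, as is a common convention for non-IID federated benchmarks (an assumption, not confirmed by the table), they denote the Dirichlet concentration parameter \(\alpha\) controlling label skew across clients, the partition can be sketched as:

```python
# Assumed Dirichlet label-skew partition: for each class, split its samples
# across clients by Dirichlet-distributed proportions. Smaller alpha gives
# more skewed (more non-IID) client datasets.
import numpy as np


def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Assign each class's sample indices to clients by Dirichlet proportions."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(part.tolist())
    return client_idx
```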