1 Introduction
2 Background
2.1 Dynamic time warping
- \((e_1,f_1)=(1,1)\)
- \((e_s,f_s)=(m,m)\)
- \(0 \le e_{i+1} - e_i \le 1\) for all \(i<s\)
- \(0 \le f_{i+1} - f_i \le 1\) for all \(i<s\)
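These conditions are enforced directly by the standard dynamic program: at each step the path may only advance one series, the other, or both. The following is a minimal Python sketch of full-window DTW between equal-length univariate series (an illustration only, not the toolkit implementations):

```python
import numpy as np

def dtw(a, b):
    """Full-window DTW between two equal-length univariate series.

    The recursion only ever moves right, down, or diagonally, which
    enforces the step conditions on (e_i, f_i) above.
    """
    m = len(a)
    cost = np.full((m + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j],      # advance e only
                                 cost[i, j - 1],      # advance f only
                                 cost[i - 1, j - 1])  # advance both
    return cost[m, m]

# Identical series have zero distance; a shifted series scores less than
# its squared Euclidean distance, since warping re-aligns the peaks.
x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 2.0, 1.0])
```

For multivariate series, the independent strategy (\(DTW_I\)) sums this quantity over dimensions, while the dependent strategy (\(DTW_D\)) replaces the pointwise term with a distance over all dimensions at once, as described in the following subsections.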
2.1.1 Independent warping (\(DTW_I\))
2.1.2 Dependent warping (\(DTW_D\))
2.1.3 Adaptive warping (\(DTW_A\))
2.2 Ensembles of univariate classifiers
2.3 Generalized random shapelet forest (gRSF)
2.4 WEASEL+MUSE
2.5 Canonical interval forest (CIF)
2.6 The random convolutional kernel transform (ROCKET)
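The multivariate kernel transform described in this section can be sketched as follows. This is a toy NumPy illustration of randomly assigning dimensions to kernels and pooling with max and ppv; it deliberately omits the dilation, padding and bias-sampling details of the real sktime implementation, and all parameter choices here are simplified assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rocket_features(X, num_kernels=100):
    """Toy sketch of ROCKET's multivariate transform.

    Each kernel is randomly assigned a subset of dimensions and given a
    weight matrix over those channels; convolution is then a dot product
    between the kernel matrix and a sliding window of the series. Only
    the max value and ppv (proportion of positive values) are kept, so
    each kernel contributes two attributes.
    """
    n, c, m = X.shape
    features = np.zeros((n, 2 * num_kernels))
    for k in range(num_kernels):
        length = rng.choice([7, 9, 11])          # ROCKET's kernel lengths
        dims = rng.choice(c, size=rng.integers(1, c + 1), replace=False)
        weights = rng.normal(size=(len(dims), length))
        bias = rng.uniform(-1, 1)
        for i in range(n):
            conv = np.array([
                np.sum(weights * X[i, dims, t:t + length]) + bias
                for t in range(m - length + 1)
            ])
            features[i, 2 * k] = conv.max()
            features[i, 2 * k + 1] = (conv > 0).mean()  # ppv
    return features

X = rng.normal(size=(4, 3, 50))  # 4 instances, 3 channels, length 50
Z = rocket_features(X, num_kernels=10)
```

With 10,000 kernels this produces the 20,000 attribute instances described below, which are then passed to a linear (ridge) classifier.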
The implementation used in this work is available in the sktime repository.2 For multivariate datasets, kernels are randomly assigned dimensions. Weights are then generated for each channel. Convolution in this case can be interpreted as the dot product between two matrices as the kernel convolves ‘horizontally’ across the series. The max value and ppv are then calculated across all dimensions for each kernel, producing a 20,000 attribute instance.

2.7 The multiple representation sequence learner (MrSEQL)
The sktime version of the MrSEQL classifier is implemented in such a way that it is capable of processing multivariate data. During the preprocessing of the data, dimensions are processed sequentially and appended to one another, creating n instances, each of size \(m \times c\). The implementation used in this work can be found in sktime.3

2.8 Deep learning
2.8.1 Residual network (ResNet)
The sktime-dl implementation used in this work is an interface to the implementation provided by that study.

2.8.2 InceptionTime
The sktime-dl implementation used in this work is an interface to the implementation provided by that study.

2.8.3 Time series attentional prototype network (TapNet)
3 The UEA multivariate time series classification archive
Code | Name | Train size | Test size | Dims | Length | Classes |
---|---|---|---|---|---|---|
AWR | ArticularyWordRecognition | 275 | 300 | 9 | 144 | 25 |
AF | AtrialFibrillation | 15 | 15 | 2 | 640 | 3 |
BM | BasicMotions | 40 | 40 | 6 | 100 | 4 |
CR | Cricket | 108 | 72 | 6 | 1197 | 12 |
DDG | DuckDuckGeese | 50 | 50 | 1345 | 270 | 5 |
EW | EigenWorms | 128 | 131 | 6 | 17,984 | 5 |
EP | Epilepsy | 137 | 138 | 3 | 206 | 4 |
EC | EthanolConcentration | 261 | 263 | 3 | 1751 | 4 |
ER | ERing | 30 | 270 | 4 | 65 | 6 |
FD | FaceDetection | 5890 | 3524 | 144 | 62 | 2 |
FM | FingerMovements | 316 | 100 | 28 | 50 | 2 |
HMD | HandMovementDirection | 160 | 74 | 10 | 400 | 4 |
HW | Handwriting | 150 | 850 | 3 | 152 | 26 |
HB | Heartbeat | 204 | 205 | 61 | 405 | 2 |
LIB | Libras | 180 | 180 | 2 | 45 | 15 |
LSST | LSST | 2459 | 2466 | 6 | 36 | 14 |
MI | MotorImagery | 278 | 100 | 64 | 3000 | 2 |
NATO | NATOPS | 180 | 180 | 24 | 51 | 6 |
PD | PenDigits | 7494 | 3498 | 2 | 8 | 10 |
PEMS | PEMS-SF | 267 | 173 | 963 | 144 | 7 |
PS | PhonemeSpectra | 3315 | 3353 | 11 | 217 | 39 |
RS | RacketSports | 151 | 152 | 6 | 30 | 4 |
SRS1 | SelfRegulationSCP1 | 268 | 293 | 6 | 896 | 2 |
SRS2 | SelfRegulationSCP2 | 200 | 180 | 7 | 1152 | 2 |
SWJ | StandWalkJump | 12 | 15 | 4 | 2500 | 3 |
UW | UWaveGestureLibrary | 120 | 320 | 3 | 315 | 8 |
3.1 Electrical biosignals
3.1.1 AtrialFibrillation (Goldberger et al. 2000)
3.1.2 FaceDetection
3.1.3 FingerMovements (Blankertz et al. 2002)
3.1.4 HandMovementDirection
3.1.5 MotorImagery (Lal et al. 2005)
3.1.6 SelfRegulationSCP1 (Birbaumer et al. 1999)
3.1.7 SelfRegulationSCP2 (Birbaumer et al. 1999)
3.1.8 StandWalkJump (Goldberger et al. 2000)
3.2 Accelerometer/gyroscope
3.2.1 BasicMotions
3.2.2 Cricket (Ko et al. 2005)
3.2.3 Epilepsy (Villar et al. 2016)
3.2.4 Handwriting (Shokoohi-Yekta et al. 2017)
3.2.5 NATOPS (Ghouaiel et al. 2017)
3.2.6 RacketSports
3.2.7 UWaveGestureLibrary (Liu et al. 2009)
3.3 Coordinates
3.3.1 ArticularyWordRecognition (Wang et al. 2013)
3.3.2 LIBRAS (Dias and Peres 2016)
3.3.3 PenDigits (Alimoğlu and Alpaydin 2001)
3.4 Audio
3.4.1 DuckDuckGeese
3.4.2 Heartbeat (Goldberger et al. 2000)
3.4.3 Phoneme (Hamooni and Mueen 2014)
3.5 Other datasets
3.5.1 ERing (Wilhelm et al. 2015)
3.5.2 EthanolConcentration (Large et al. 2018)
3.5.3 LSST
3.5.4 PEMS-SF (Cuturi 2011)
4 Methods
4.1 Toolkits
sktime14 is an open source, Python based, sklearn compatible toolkit for time series analysis. sktime is designed to provide a unifying API for a range of time series tasks such as annotation, prediction and forecasting. See Löning et al. (2019) for a description of the overarching design of sktime, and Bagnall et al. (2019) for an experimental comparison of some of the classification algorithms available. The Java toolkit for time series machine learning, tsml,15 is Weka compatible and is the descendant of the codebase used to perform univariate TSC benchmarking (Bagnall et al. 2017). The two toolkits will eventually converge to include all classifiers described. To reduce the number of dependencies in the core package, sktime has subpackages for specific forms of classification: sktime-dl provides a range of deep learning approaches to time series classification, and sktime-shapelets-forest gives shapelet functionality.16 The mechanism for running an experiment for a combination of classifier, problem and resample (‘single evaluation’, henceforth) is the same in both toolkits. Available classifiers are listed in ClassifierLists.java and classifier_lists.py. Usage with the tsml class Experiments.java is shown in code listing 1. The equivalent with the sktime class experiments.py is shown in listing 2.
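Stripped of the toolkits' file handling, a single evaluation amounts to the following. This is a plain NumPy sketch with a 1-NN Euclidean stand-in classifier; the resampling scheme shown is an illustrative assumption, not the exact procedure used by Experiments.java or experiments.py:

```python
import numpy as np

def single_evaluation(X_train, y_train, X_test, y_test, seed):
    """One classifier/problem/resample evaluation, sketched.

    Resample 0 uses the original train/test split; other resamples
    shuffle the pooled data under the seed, keeping the split sizes.
    (Illustrative only; the toolkits also handle file I/O, timing and
    result formatting.)
    """
    if seed != 0:
        X = np.concatenate([X_train, X_test])
        y = np.concatenate([y_train, y_test])
        idx = np.random.default_rng(seed).permutation(len(y))
        n_train = len(y_train)
        X_train, X_test = X[idx[:n_train]], X[idx[n_train:]]
        y_train, y_test = y[idx[:n_train]], y[idx[n_train:]]
    # 1-NN with Euclidean distance as a stand-in classifier
    preds = []
    for x in X_test:
        dists = np.sum((X_train - x) ** 2, axis=(1, 2))
        preds.append(y_train[np.argmin(dists)])
    return np.mean(np.array(preds) == y_test)

# Two well-separated synthetic classes of shape (instances, dims, length)
rng = np.random.default_rng(1)
X_train = np.concatenate([rng.normal(0, 0.1, (5, 2, 8)),
                          rng.normal(10, 0.1, (5, 2, 8))])
y_train = np.array([0] * 5 + [1] * 5)
X_test = np.concatenate([rng.normal(0, 0.1, (3, 2, 8)),
                         rng.normal(10, 0.1, (3, 2, 8))])
y_test = np.array([0, 0, 0, 1, 1, 1])
acc = single_evaluation(X_train, y_train, X_test, y_test, seed=0)
```

Seeding the resample in this way is what makes every (classifier, problem, resample) job reproducible and independently schedulable on a cluster.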
When generateTrainFiles is set to true, an external cross validation (or internal performance estimation mechanism, if available) on the train data for that resample will be used to generate a train results file. The dimension independent ensembles can be built in both toolkits using a dimension ensemble: in tsml this can be done with, for example, the RISE classifier, and the sktime equivalent is ColumnEnsembleClassifier, which can be configured similarly. It is possible to build HIVE-COTE from DimensionIndependentEnsemble elements, but computational resources can likely be better utilised if each component is built independently (with generateTrainFiles set to true) and then ensembled with HIVE-COTE later from the results files. Table 2 lists the algorithms and their availability in the toolkits. Where a classifier is available in both toolkits, we run experiments in tsml, because it is generally faster. TapNet is the only algorithm not yet ported to a toolkit; we are working to include it in sktime-dl.

Algorithm | tsml | sktime | sktime-dl | sktime-shapelets |
---|---|---|---|---|
DTW_D | X | X | | |
DTW_I | X | X | | |
DTW_A | X | | | |
MUSE | X | | | |
gRSF | | | | X |
MrSEQL | | X | | |
ROCKET | | X | | |
CIF | X | X | | |
TapNet | | | | |
ResNet | | | X | |
InceptionTime | | | X | |
CBOSS | X | X | | |
STC | X | X | | X |
RISE | X | X | | |
TSF | X | X | | |
HIVE-COTE | X | X | | |
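The dimension-independent ensembling behind these classes can be sketched in a few lines. This is illustrative NumPy only, not the ColumnEnsembleClassifier or tsml code, and the OneNN member classifier is a hypothetical stand-in for any univariate base classifier:

```python
import numpy as np

class OneNN:
    """Hypothetical univariate base classifier: 1-NN with Euclidean distance."""

    def fit(self, X, y):
        self.X_, self.y_ = X, y
        self.classes_ = np.unique(y)
        return self

    def predict_proba(self, X):
        out = np.zeros((len(X), len(self.classes_)))
        for i, x in enumerate(X):
            nearest = self.y_[np.argmin(np.sum((self.X_ - x) ** 2, axis=1))]
            out[i, np.searchsorted(self.classes_, nearest)] = 1.0
        return out

class DimensionEnsemble:
    """Fit one copy of a univariate classifier per dimension and average
    the class probabilities: the idea behind tsml's dimension independent
    ensembles and sktime's ColumnEnsembleClassifier (sketch only)."""

    def __init__(self, make_classifier):
        self.make_classifier = make_classifier

    def fit(self, X, y):
        # X has shape (n_instances, n_dims, series_length)
        self.classes_ = np.unique(y)
        self.members_ = [self.make_classifier().fit(X[:, d, :], y)
                         for d in range(X.shape[1])]
        return self

    def predict(self, X):
        probs = np.mean([clf.predict_proba(X[:, d, :])
                         for d, clf in enumerate(self.members_)], axis=0)
        return self.classes_[np.argmax(probs, axis=1)]

# Two trivially separable classes, 2 dimensions, length 5
X = np.concatenate([np.zeros((3, 2, 5)), np.full((3, 2, 5), 5.0)])
y = np.array([0, 0, 0, 1, 1, 1])
ens = DimensionEnsemble(OneNN).fit(X, y)
```

Because each member sees only one dimension, any univariate classifier can be lifted to the multivariate setting this way without modification.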
4.2 Evaluation and comparison of classifiers
Evaluation is performed with code in tsml. This code collates all the results, summarises a large range of performance metrics (accuracy, AUC, F1 etc.), conducts statistical tests to compare classifiers and draws comparative diagrams such as scatter plots and critical difference diagrams. The results collated by MultipleClassifierEvaluation (MCE) (see Listing 4) include performance metrics (accuracy, area under the ROC curve, balanced accuracy, F1, negative log likelihood, Matthews correlation coefficient, recall/sensitivity, precision and specificity). When stored in the problem files, it also collates memory usage and run time. By default, MCE compares pairs of classifiers using the Wilcoxon signed rank test, and presents the relative results as scatter plots and critical difference diagrams generated in Matlab. These graphs are explained in more detail when first used.

For brevity, in Sect. 5 we present a selection of the results generated by MCE; all results are available on the associated website. Our main focus is on accuracy, due to its ease of motivation and interpretation on arbitrary datasets, but we also present the area under the receiver operating characteristic curve (AUROC), balanced accuracy and F1 statistics. For pairwise comparison of two classifiers, by default we follow the standard machine learning approach of using the non-parametric Wilcoxon signed rank test. For some tests, we have also performed a paired t-test for contrast. To compare multiple classifiers on multiple datasets, we adapt the approach from Demšar (2006) and use critical difference diagrams. These order classifiers by rank, and group classifiers into cliques: sets of classifiers between which there is no significant difference. Based on the literature (Benavoli et al. 2016), we abandon the post hoc test used in Demšar (2006) and instead form cliques with pairwise tests, making the Holm correction for multiple testing. For the majority of diagrams we use the Wilcoxon signed rank test for pairwise comparison; for completeness, and as a basic sanity check, we also show the results of paired t-tests for the most important results.
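The Holm correction used when forming cliques can be sketched as follows. This is a minimal step-down adjustment over a list of pairwise p-values (e.g. from Wilcoxon signed rank tests); it is an illustration of the correction, not the MCE code:

```python
def holm_adjust(p_values):
    """Holm step-down adjustment for multiple pairwise tests.

    Sort p-values ascending, multiply the i-th smallest by (k - i),
    and enforce monotonicity; a pair is significantly different if its
    adjusted p-value falls below alpha.
    """
    k = len(p_values)
    order = sorted(range(k), key=lambda i: p_values[i])
    adjusted = [0.0] * k
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (k - rank) * p_values[i])
        running_max = max(running_max, adj)  # keep adjusted values monotone
        adjusted[i] = running_max
    return adjusted

# e.g. three pairwise comparisons at alpha = 0.05: only the first
# survives the correction
raw = [0.01, 0.04, 0.03]
adj = holm_adjust(raw)
```

Classifiers whose pairwise comparisons do not survive the correction are grouped into the same clique on the critical difference diagram.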
4.3 Experiments
Experiments with tsml, sktime and sktime-shapelets-forest were conducted on the UEA high performance computing (HPC) cluster. The nature of the HPC means that any one job (a single evaluation) runs on a single core and has a maximum execution time of 7 days. For memory intensive algorithms, we reran with increasing memory until successful completion, up to a maximum of 500 GB. Experiments with sktime-dl and TapNet were performed on GPUs in desktops, one with a Titan XP and one with four Titan X Pascals. All jobs were run on a single GPU, and each GPU ran only one job at a time. There was no time limit for these jobs; however, they were limited by the GPU memory of 12 GB per card.

Algorithm | Configuration |
---|---|
DTW_D | Full warping window |
DTW_I | Full warping window |
DTW_A | Full warping window |
MUSE | \(\chi = 2\) |
gRSF | Default (max depth: none, min sample split: 2, num shapelets: 10, min size: 0%, max size: 100%, metric: Euclidean distance) |
MrSEQL | seql_mode: fs, symrep: ['sax', 'sfa'] |
ROCKET | Ridge regression classifier, 10,000 kernels |
CIF | Default (trees: 500, intervals: \(\sqrt{m} \times \sqrt{d}\), 8 attributes per tree) |
TapNet | Default (epochs: 3000, learning rate: \(1e{-}5\), weight decay: \(1e{-}3\), stop threshold: \(1e{-}9\), num filters: [256, 256, 128], kernels: [8, 5, 3], dilation: 1, dropout: 0%) |
 | ArticularyWordRecognition (dilation: 10) |
 | EthanolConcentration (dilation: 200, learning rate: \(1e{-}6\)) |
 | FaceDetection (filters: [64, 64, 32], learning rate: \(5e{-}5\)) |
 | Heartbeat (dilation: 200, learning rate: \(1e{-}6\), filters: [64, 64, 32]) |
 | PenDigits (kernels: [4, 1, 1], learning rate: \(1e{-}3\)) |
 | PhonemeSpectra (filters: [64, 64, 32], learning rate: \(1e{-}3\)) |
 | SelfRegulationSCP1 (learning rate: \(1e{-}6\)) |
 | SelfRegulationSCP2 (learning rate: \(1e{-}9\)) |
 | SpokenArabicDigits (filters: [128, 128, 64], learning rate: \(1e{-}4\)) |
ResNet | Epochs: 1500, batch size: 16, learning rate: \(1e{-}3\), halved after no improvement for 50 epochs; three residual blocks, each with three conv layers with kernel sizes [8, 5, 3]; filters per conv layer for each block: [64, 128, 128] |
InceptionTime | Epochs: 1500, batch size: 64, learning rate: \(1e{-}3\), halved after no improvement for 50 epochs; two residual blocks, each with three Inception modules with kernel sizes per module [10, 20, 40], plus bottleneck; filters for all conv layers: 32 |
CBOSS | Default, see Bagnall et al. (2020) |
STC | Default, see Bagnall et al. (2020) |
RISE | Default, see Bagnall et al. (2020) |
TSF | Default, see Bagnall et al. (2020) |
HIVE-COTE | Version 1.0, see Bagnall et al. (2020) |
 | \(\hbox {DTW}_A\) | \(\hbox {DTW}_D\) | \(\hbox {nDTW}_A\) | \(\hbox {nDTW}_D\) | \(\hbox {nDTW}_I\) | \(\hbox {DTW}_I\) |
---|---|---|---|---|---|---|
\(\hbox {DTW}_A\) | True | True | True | False | False | False |
\(\hbox {DTW}_D\) | True | True | True | True | False | False |
\(\hbox {nDTW}_A\) | True | True | True | False | True | True |
\(\hbox {nDTW}_D\) | False | True | False | True | True | True |
\(\hbox {nDTW}_I\) | False | False | True | True | True | True |
\(\hbox {DTW}_I\) | False | False | True | True | True | True |
5 Results
5.1 Comparison of eleven classifiers on twenty six datasets
 | RCKT | HC | CIF | ResNet | STC | \(\hbox {DTW}_D\) | gRSF | TSF | CBOSS | RISE | \(\hbox {DTW}_I\) |
---|---|---|---|---|---|---|---|---|---|---|---|
RCKT | 0.0000 | 0.1742 | 0.5506 | 0.0619 | 0.0283 | 0.0004 | 0.0004 | 0.0039 | 0.0006 | 0.0024 | 0.0000 |
HC | 0.9128 | 0.0000 | 0.5338 | 0.1285 | 0.0585 | 0.0092 | 0.0023 | 0.0004 | 0.0000 | 0.0003 | 0.0000 |
CIF | 0.6660 | 0.6402 | 0.0000 | 0.2692 | 0.0520 | 0.0043 | 0.0009 | 0.0006 | 0.0017 | 0.0001 | 0.0001 |
ResNet | 0.0759 | 0.1246 | 0.1184 | 0.0000 | 0.9899 | 0.3282 | 0.5812 | 0.5506 | 0.2277 | 0.1500 | 0.0092 |
STC | 0.4282 | 0.0621 | 0.0580 | 0.4512 | 0.0000 | 0.1068 | 0.1513 | 0.0578 | 0.0074 | 0.0020 | 0.0004 |
\(\hbox {DTW}_D\) | 0.0005 | 0.0159 | 0.0166 | 0.2989 | 0.1630 | 0.0000 | 0.4091 | 0.8689 | 0.7509 | 0.6204 | 0.0022 |
gRSF | 0.0003 | 0.0168 | 0.0041 | 0.7645 | 0.1532 | 0.5509 | 0.0000 | 0.5338 | 0.5338 | 0.1218 | 0.0010 |
TSF | 0.0039 | 0.0044 | 0.0006 | 0.6544 | 0.0837 | 0.7847 | 0.7881 | 0.0000 | 0.6938 | 0.5338 | 0.0230 |
CBOSS | 0.0008 | 0.0022 | 0.0037 | 0.3621 | 0.0669 | 0.9258 | 0.5020 | 0.7520 | 0.0000 | 0.3949 | 0.0054 |
RISE | 0.0024 | 0.0008 | 0.0000 | 0.3258 | 0.0044 | 0.6638 | 0.1534 | 0.3549 | 0.4838 | 0.0000 | 0.1307 |
\(\hbox {DTW}_I\) | 0.0000 | 0.0001 | 0.0001 | 0.0036 | 0.0009 | 0.0074 | 0.0028 | 0.0096 | 0.0042 | 0.1013 | 0.0000 |
 | \(\hbox {DTW}_D\) (%) | ROCKET (%) | CIF (%) | HIVE-COTE (%) |
---|---|---|---|---|
AWR | \(98.87 \pm 0.05\) | \(\mathbf{99.56 \pm 0.13}\) | \(97.89 \pm 0.15\) | \(97.99 \pm 0.10\) |
AF | \(23.56 \pm 1.39\) | \(24.89 \pm 1.68\) | \(25.11 \pm 2.18\) | \(\mathbf{29.33 \pm 1.31}\) |
BM | \(95.25 \pm 0.23\) | \(99.00 \pm 0.00\) | \(99.75 \pm 0.14\) | \(\mathbf{100.0 \pm 0.84}\) |
CR | \(\mathbf{100.0 \pm 0.00}\) | \(\mathbf{100.0 \pm 0.13}\) | \(98.38 \pm 0.29\) | \(99.26 \pm 0.00\) |
DDG | \(49.20 \pm 0.99\) | \(46.13 \pm 1.04\) | \(\mathbf{56.00 \pm 1.03}\) | \(47.60 \pm 1.20\) |
EW | \(64.58 \pm 0.53\) | \(86.28 \pm 1.21\) | \(\mathbf{90.33 \pm 0.54}\) | \(78.17 \pm 0.62\) |
EP | \(96.30 \pm 0.17\) | \(99.08 \pm 0.00\) | \(98.38 \pm 0.27\) | \(\mathbf{100.0 \pm 0.26}\) |
EC | \(30.15 \pm 0.54\) | \(44.68 \pm 0.43\) | \(72.89 \pm 0.56\) | \(\mathbf{80.68 \pm 0.50}\) |
ER | \(92.91 \pm 0.12\) | \(\mathbf{98.05 \pm 0.49}\) | \(95.65 \pm 0.42\) | \(94.26 \pm 0.40\) |
FD | \(53.28 \pm 0.23\) | \(\mathbf{69.42 \pm 0.30}\) | \(68.89 \pm 0.27\) | \(69.17 \pm 0.14\) |
FM | \(54.17 \pm 0.90\) | \(\mathbf{55.27 \pm 0.84}\) | \(53.90 \pm 0.81\) | \(53.77 \pm 0.93\) |
HMD | \(30.32 \pm 1.00\) | \(44.59 \pm 0.87\) | \(\mathbf{52.21 \pm 1.08}\) | \(37.79 \pm 0.81\) |
HW | \(\mathbf{61.21 \pm 0.42}\) | \(56.67 \pm 0.42\) | \(35.13 \pm 0.40\) | \(50.41 \pm 0.42\) |
HB | \(68.88 \pm 0.37\) | \(71.76 \pm 0.02\) | \(\mathbf{76.52 \pm 0.30}\) | \(72.18 \pm 0.52\) |
LIB | \(88.04 \pm 0.44\) | \(90.61 \pm 0.45\) | \(\mathbf{91.67 \pm 0.49}\) | \(90.28 \pm 0.61\) |
LSST | \(54.76 \pm 0.08\) | \(\mathbf{63.15 \pm 0.16}\) | \(56.17 \pm 0.22\) | \(53.84 \pm 0.14\) |
MI | \(52.10 \pm 0.73\) | \(\mathbf{53.13 \pm 0.78}\) | \(51.80 \pm 1.03\) | \(52.17 \pm 0.74\) |
NATO | \(82.04 \pm 0.32\) | \(\mathbf{88.54 \pm 0.44}\) | \(84.41 \pm 0.32\) | \(82.85 \pm 0.32\) |
PD | \(99.28 \pm 0.05\) | \(\mathbf{99.56 \pm 0.14}\) | \(98.97 \pm 0.08\) | \(97.19 \pm 0.06\) |
PEMS | \(77.05 \pm 0.58\) | \(85.63 \pm 0.38\) | \(\mathbf{99.85 \pm 0.09}\) | \(97.98 \pm 0.59\) |
PS | \(15.39 \pm 0.10\) | \(28.35 \pm 0.12\) | \(26.56 \pm 0.13\) | \(\mathbf{32.87 \pm 0.07}\) |
RS | \(85.64 \pm 0.26\) | \(\mathbf{92.79 \pm 0.45}\) | \(89.30 \pm 0.51\) | \(90.64 \pm 0.37\) |
SRS1 | \(81.81 \pm 0.35\) | \(\mathbf{86.55 \pm 0.31}\) | \(85.94 \pm 0.28\) | \(86.02 \pm 0.32\) |
SRS2 | \(\mathbf{53.69 \pm 0.49}\) | \(51.35 \pm 0.59\) | \(48.87 \pm 0.56\) | \(51.67 \pm 0.67\) |
SWJ | \(22.00 \pm 1.87\) | \(\mathbf{45.56 \pm 2.72}\) | \(45.11 \pm 2.65\) | \(40.67 \pm 1.54\) |
UW | \(92.28 \pm 0.21\) | \(\mathbf{94.43 \pm 0.35}\) | \(92.42 \pm 0.32\) | \(91.31 \pm 0.23\) |
Classifier | Total time (h) | Difference in accuracy to \(\hbox {DTW}_D\) (%) |
---|---|---|
ROCKET | 1.26 | 5.86 |
gRSF | 9.27 | 1.0 |
ResNet | 13.38 | 1.72 |
CIF | 148.55 | 6.51 |
CBOSS | 181.60 | 0.13 |
TSF | 263.88 | 0.59 |
RISE | 279.64 | \(-1.11\) |
STC | 7019.69 | 4.06 |
HIVE-COTE | 12,172.44 | 5.98 |
Memory usage is recorded in tsml, but this capability is not yet in sktime and its variants. Table 8 shows the maximum and total memory usage of eight tsml classifiers. HIVE-COTE is the most memory intensive classifier, but even HIVE-COTE required at most 3.5 GB (MotorImagery) and just 21 GB for all problems. Memory is not a significant constraint for these classifiers.

Classifier | Max memory (MB) | Total memory (MB) |
---|---|---|
\(\hbox {DTW}_I\) | 1883 | 5587 |
\(\hbox {DTW}_D\) | 1845 | 5952 |
RISE | 2624 | 10,242 |
TSF | 2670 | 10,632 |
CBOSS | 2675 | 10,537 |
STC | 2163 | 9778 |
CIF | 2954 | 15,900 |
HIVE-COTE | 3577 | 21,217 |
5.2 Comparison of sixteen classifiers on twenty datasets
Algorithm | Datasets completed | P-value | W/D/L |
---|---|---|---|
MUSE | 20 | 0.1005 | 13/0/7 |
TapNet | 23 | 0.9015 | 10/0/13 |
MrSEQL | 24 | 0.0593 | 16/0/8 |
\(\hbox {DTW}_A\) | 25 | 0.6900 | 10/2/13 |
InceptionTime | 25 | 0.0149 | 17/0/8 |
STC | 26 | 0.1067 | 15/0/11 |
HIVE-COTE | 26 | 0.0043 | 17/0/9 |
CIF | 26 | 0.0092 | 19/0/7 |
ROCKET | 26 | 0.0004 | 22/1/3 |
5.3 Analysis by problem
Problem | Classes | Default (%) | \(\hbox {DTW}_D\) (%) | Best (%) | Algorithm(s) |
---|---|---|---|---|---|
AWR | 25 | 4.00 | 98.87 | 99.56 | ROCKET |
AF | 3 | 33.3 | 23.56 | 74.00 | MUSE |
BM | 4 | 25.0 | 95.25 | 100.0 | IT, MUSE, HC, ResNet, gRSF, RISE |
CR | 12 | 8.33 | 100.0 | 100.0 | ROCKET, \(\hbox {DTW}_A\), \(\hbox {DTW}_D\) |
DDG | 5 | 20.0 | 49.20 | 63.47 | IT |
EW | 5 | 42.0 | 64.58 | 90.33 | CIF |
EP | 4 | 26.8 | 96.30 | 100.0 | HC |
EC | 4 | 25.1 | 30.15 | 82.36 | STC |
ER | 6 | 16.7 | 92.91 | 98.05 | ROCKET |
FD | 2 | 50.0 | 53.28 | 77.24 | IT |
FM | 2 | 50.0 | 54.17 | 56.13 | IT |
HMD | 4 | 18.9 | 30.32 | 52.21 | CIF |
HW | 26 | 3.8 | 61.21 | 65.74 | IT |
HB | 2 | 72.2 | 68.88 | 76.52 | CIF |
LIB | 15 | 6.7 | 88.04 | 94.11 | ResNet |
LSST | 14 | 31.5 | 54.76 | 63.62 | MUSE |
MI | 2 | 50.0 | 52.10 | 53.80 | TSF |
NATO | 6 | 16.7 | 82.04 | 97.11 | ResNet |
PD | 10 | 10.4 | 99.28 | 99.68 | IT |
PEMS | 7 | 11.6 | 77.05 | 99.85 | CIF |
PS | 39 | 2.6 | 15.39 | 36.74 | IT |
RS | 4 | 28.3 | 85.64 | 92.79 | ROCKET |
SRS1 | 2 | 50.2 | 81.81 | 95.68 | TapNet |
SRS2 | 2 | 50.0 | 53.69 | 53.69 | \(\hbox {DTW}_D\) |
SWJ | 3 | 33.3 | 22.00 | 45.56 | ROCKET |
UW | 8 | 12.5 | 92.28 | 94.43 | ROCKET |
5.4 Explanatory analysis case studies
5.4.1 Ethanol concentration
5.4.2 PEMS-SF
6 Conclusions
The sktime toolkit was extended with multivariate functionality so that we could include it in the study. This highlights the benefits of code sharing within a common framework.