1 Introduction
1.1 Related works
Method | Advantages | Disadvantages |
---|---|---|
Manually chosen fuzzy sets with regression (Wiktorowicz and Krzeszowski 2020) | Very simple to use and implement; repeatability of obtained models; low computational cost | Generates less fitted models; regression may be ill-conditioned; the number of fuzzy sets must be defined |
| Simple to use and implement | Large search space; difficulty in finding optimal parameters; the number of fuzzy sets must be defined; high computational cost; stochastic characteristic of the results |
| The use of regression reduces the search space | Regression may be ill-conditioned; the number of fuzzy sets must be defined; high computational cost; stochastic characteristic of the results |
| The use of clustering gives the initial structure of a system | Clustering complicates the algorithm; high computational cost; stochastic characteristic of the results |
| The use of clustering gives the initial structure of a system; the use of regression reduces the search space | Clustering complicates the algorithm; regression may be ill-conditioned; stochastic characteristic of the results |
Our approach | The use of sparse regression simplifies the model; using the ridge and sparse regressions prevents the occurrence of an ill-conditioned problem; the use of regression reduces the search space; the use of a high-order system provides greater flexibility in system design | The number of fuzzy sets must be defined; stochastic characteristic of the results; high computational cost; applying the high-order fuzzy system increases the number of parameters in the consequent part |
1.2 Contributions
- The definition of high-order Takagi–Sugeno fuzzy systems with two input variables,
- The use of sparse regressions and metaheuristic optimization to train these systems.
1.3 Paper structure
2 High-order Takagi–Sugeno fuzzy system
- Zero-order if \(P_j(x_1,x_2)=b_j\), where \(b_j\in {\mathbb {R}}\), which means that the consequent functions are constants (polynomial degree d is equal to zero) (Takagi and Sugeno 1985),
- First-order if \(P_j(x_1,x_2)=w_{1j}x_1+v_{1j}x_2+b_j\), where \(w_{1j},v_{1j}\in {\mathbb {R}}\), which means that the consequent functions are linear (polynomial degree d is equal to one) (Takagi and Sugeno 1985),
- High-order if \(P_j(x_1,x_2)=w_{mj}x_1^m +\ldots +w_{1j}x_1+v_{mj}x_2^m +\ldots +v_{1j}x_2+b_j\), where \(m\ge 2\), \(w_{kj},v_{kj}\in {\mathbb {R}}\), and \(k=2,3,\ldots ,m\), which means that the consequent functions are nonlinear (polynomial degree d is greater than one).
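To make the definitions above concrete, the following is a minimal Python sketch of a high-order system's inference (illustrative only, not the authors' Matlab code; Gaussian membership functions, the product t-norm, and normalized weighted-average defuzzification are assumed, in line with Sect. 7.2):

```python
import math

def gauss(x, c, s):
    # Gaussian membership function with center c and width s
    return math.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def ts_output(x1, x2, rules):
    # rules: list of (c1, s1, c2, s2, (ws, vs, b)), where ws[k-1] is the
    # coefficient of x1^k, vs[k-1] of x2^k, and b the constant term of P_j
    num = 0.0
    den = 0.0
    for c1, s1, c2, s2, (ws, vs, b) in rules:
        tau = gauss(x1, c1, s1) * gauss(x2, c2, s2)  # firing strength
        p = b
        for k, w in enumerate(ws, start=1):
            p += w * x1 ** k
        for k, v in enumerate(vs, start=1):
            p += v * x2 ** k
        num += tau * p
        den += tau
    return num / den  # normalized weighted average of the consequents
```

With a single rule, the output reduces to that rule's consequent polynomial, which is a quick sanity check; with empty `ws` and `vs` the sketch degenerates to a zero-order system.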
j | \(F_j(x_1)\) | \(G_j(x_2)\) | \(P_j(x_1,x_2)\) |
---|---|---|---|
1 | \(A_1(x_1)\) | \(B_1(x_2)\) | \(P_1(x_1,x_2)\) |
\(\vdots \) | \(\vdots \) | \(\vdots \) | \(\vdots \) |
\(\rho \) | \(A_1(x_1)\) | \(B_\rho (x_2)\) | \(P_{\rho }(x_1,x_2)\) |
\(\rho +1\) | \(A_2(x_1)\) | \(B_1(x_2)\) | \(P_{\rho +1}(x_1,x_2)\) |
\(\vdots \) | \(\vdots \) | \(\vdots \) | \(\vdots \) |
\(2\rho \) | \(A_2(x_1)\) | \(B_\rho (x_2)\) | \(P_{2\rho }(x_1,x_2)\) |
\(\vdots \) | \(\vdots \) | \(\vdots \) | \(\vdots \) |
\(r-\rho +1\) | \(A_\rho (x_1)\) | \(B_1(x_2)\) | \(P_{r-\rho +1}(x_1,x_2)\) |
\(\vdots \) | \(\vdots \) | \(\vdots \) | \(\vdots \) |
r | \(A_\rho (x_1)\) | \(B_\rho (x_2)\) | \(P_{r}(x_1,x_2)\) |
- For the zero-order system as
  $$\begin{aligned} y = \sum _{j=1}^{r} \xi _j(x_1,x_2)b_j, \end{aligned}$$ (6)
- For the first-order and high-order systems as
  $$\begin{aligned} y&= \sum _{j=1}^{r} \xi _j(x_1,x_2)x_1^m w_{mj} + \ldots + \xi _j(x_1,x_2)x_1 w_{1j} \\&\quad + \xi _j(x_1,x_2)x_2^m v_{mj} + \ldots + \xi _j(x_1,x_2)x_2 v_{1j} \\&\quad + \xi _j(x_1,x_2)b_j. \end{aligned}$$ (7)
- For the zero-order system as
  $$\begin{aligned} {\mathbf {h}}_j(x_1,x_2)&= \xi _j(x_1,x_2), \end{aligned}$$ (11)
  $$\begin{aligned} {\mathbf {w}}_j&= b_j, \end{aligned}$$ (12)
- For the first-order and high-order systems as
  $$\begin{aligned} {\mathbf {h}}_j(x_1,x_2)&= [h_{mj},\ldots ,h_{1j},g_{mj},\ldots ,g_{1j},\xi _j], \end{aligned}$$ (13)
  $$\begin{aligned} {\mathbf {w}}_j&= [w_{mj},\ldots ,w_{1j},v_{mj},\ldots ,v_{1j},b_j]^T, \end{aligned}$$ (14)
  where \(\dim ({\mathbf {h}}_j)=\dim ({\mathbf {w}}_j^T)=2d+1\).
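The blocks \({\mathbf {h}}_j\) concatenate into one regression row per data sample, so that the output is linear in all consequent parameters. A minimal sketch of building that row (hypothetical helper names; the normalized firing strengths \(\xi _j\) are assumed precomputed):

```python
def rule_block(xi, x1, x2, m):
    # h_j = [xi*x1^m, ..., xi*x1, xi*x2^m, ..., xi*x2, xi], cf. eq. (13),
    # with h_kj = xi*x1^k and g_kj = xi*x2^k as in eq. (7)
    return ([xi * x1 ** k for k in range(m, 0, -1)]
            + [xi * x2 ** k for k in range(m, 0, -1)]
            + [xi])

def regression_row(xis, x1, x2, m):
    # one row of the regression matrix X: the blocks h_1, ..., h_r side by side
    row = []
    for xi in xis:
        row += rule_block(xi, x1, x2, m)
    return row
```

Stacking one such row per training sample yields the matrix \({\mathbf {X}}\) that the OLS, ridge, and sparse regressions of Sect. 3 operate on.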
3 Training the consequent parameters
3.1 Ordinary least squares
3.2 Ridge regression
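For reference, the ridge estimate solves \(({\mathbf {X}}^T{\mathbf {X}}+\lambda {\mathbf {I}}){\mathbf {w}}={\mathbf {X}}^T{\mathbf {y}}\). A deliberately tiny two-regressor sketch (illustrative only; the experiments use Matlab, and the 2x2 inverse is written out by hand):

```python
def ridge_two_regressors(X, y, lam):
    # closed-form ridge for two regressors:
    # w = (X^T X + lam*I)^(-1) X^T y, solved via an explicit 2x2 inverse
    a = sum(row[0] * row[0] for row in X) + lam
    b = sum(row[0] * row[1] for row in X)
    d = sum(row[1] * row[1] for row in X) + lam
    g0 = sum(row[0] * t for row, t in zip(X, y))
    g1 = sum(row[1] * t for row, t in zip(X, y))
    det = a * d - b * b
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)
```

Setting \(\lambda =0\) recovers OLS, while \(\lambda >0\) keeps the normal-equation matrix well conditioned, which is the motivation stated in the comparison of methods in Sect. 1.1.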
3.3 Sparse regressions
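What lets the sparse methods (FS, LAR, LASSO, ENET) zero out consequent coefficients is the shrinkage operator: for an orthonormal design, the LASSO coefficient is the soft-thresholded OLS coefficient (a textbook identity, sketched here, not the paper's implementation):

```python
def soft_threshold(z, lam):
    # lasso shrinkage: coefficients with |z| <= lam are set exactly to zero,
    # larger ones are pulled toward zero by lam
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0
```

This exact zeroing is the mechanism behind the many zero entries of \(w\), \(v\), and \(b\) visible in the PSO-ENET and PSO-FS rule tables of Sect. 7.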
4 Training the antecedent parameters
4.1 Particle swarm optimization
- Cognition component—attracts particles toward the local best position,
- Social component—attracts particles toward the best position in the swarm.
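The two components enter the classical velocity update; a hedged sketch of the standard formula (inertia weight `w` and acceleration coefficients `c1`, `c2` are conventional names, not values taken from the paper):

```python
import random

def update_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
                    rand=random.random):
    # new velocity = inertia + cognition component + social component
    return [w * vi
            + c1 * rand() * (pi - xi)   # cognition: pull toward local best
            + c2 * rand() * (gi - xi)   # social: pull toward swarm best
            for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
```

Each particle's position is then advanced by its velocity and clipped to the user-defined bounds on the membership-function parameters.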
4.2 Genetic algorithm
- Selection—during this process, individuals called "parents" are selected through a fitness-based process. Individuals with a good value of the objective function (Sect. 5) are more often chosen for the next generation,
- Crossover (recombination)—combines two "parents" to form "children" for the next generation; it is analogous to the crossover that takes place during sexual reproduction in biology. The new individuals have the characteristics of both parents,
- Mutation—during the mutation process, an individual mutates, that is, random changes are introduced into its genotype. The purpose of this operator is to maintain diversity in the population, which prevents premature convergence of the algorithm.
4.3 Simulated annealing
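Simulated annealing's core decision is the Metropolis acceptance rule: worse candidates are accepted with a probability that shrinks as the temperature cools, which lets the search escape local minima early on. A minimal sketch of the standard rule (assumed form, not the authors' implementation):

```python
import math
import random

def accept(delta, temp, rand=random.random):
    # delta: change in the objective (positive = worse candidate)
    # always accept improvements; accept worse moves with prob exp(-delta/temp)
    if delta <= 0:
        return True
    return rand() < math.exp(-delta / temp)
```

As `temp` decreases over the iterations, the rule degenerates to pure greedy descent on the antecedent parameters.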
5 Performance criterion
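The result tables below report \(\overline{\mathrm {RMSE}}\); the criterion itself can be computed as follows (a standard RMSE definition, assumed to match the paper's):

```python
import math

def rmse(y_true, y_pred):
    # root-mean-square error between target and model outputs
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
```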
6 Design procedure for training fuzzy models
- Non-sparse methods:
  - OLS: the fuzzy sets are defined by the user, while the polynomials are determined by the OLS regression,
  - RIDGE: the fuzzy sets are defined by the user, while the polynomials are determined by the ridge regression,
  - PSO-OLS: the fuzzy sets are determined by the PSO algorithm, while the polynomials are determined by the OLS regression,
  - PSO-RIDGE: the fuzzy sets are determined by the PSO algorithm, while the polynomials are determined by the ridge regression,
  - GA-OLS: the fuzzy sets are determined by the GA, while the polynomials are determined by the OLS regression,
  - GA-RIDGE: the fuzzy sets are determined by the GA, while the polynomials are determined by the ridge regression,
  - SA-OLS: the fuzzy sets are determined by the SA algorithm, while the polynomials are determined by the OLS regression,
  - SA-RIDGE: the fuzzy sets are determined by the SA algorithm, while the polynomials are determined by the ridge regression.
- Sparse methods:
  - SR: the fuzzy sets are defined by the user, while the polynomials are determined by a sparse regression (SR), e.g., FS, LAR, LASSO, or ENET,
  - PSO-SR: the fuzzy sets are determined by the PSO algorithm, while the polynomials are determined by a sparse regression,
  - GA-SR: the fuzzy sets are determined by the GA, while the polynomials are determined by a sparse regression,
  - SA-SR: the fuzzy sets are determined by the SA algorithm, while the polynomials are determined by a sparse regression.
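All metaheuristic–regression variants above share one loop: the metaheuristic proposes antecedent parameters, the chosen regression fits the consequent polynomials, and the candidate is scored by the performance criterion. A schematic sketch with hypothetical callback names (the real candidates come from PSO, GA, or SA rather than a fixed list):

```python
def train(candidates, fit_consequents, score):
    # generic hybrid loop: for each antecedent candidate, fit the consequents
    # by regression and keep the candidate with the best criterion value
    best = None
    for antecedent in candidates:
        w = fit_consequents(antecedent)   # OLS / ridge / sparse regression
        err = score(antecedent, w)        # e.g., RMSE on the training data
        if best is None or err < best[0]:
            best = (err, antecedent, w)
    return best
```

Because the regression solves the consequent part exactly for each candidate, the metaheuristic only has to search the (much smaller) antecedent space, which is the "regression reduces the search space" advantage cited in Sect. 1.1.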
Algorithm | \(\overline{\mathrm {RMSE}}\) | std | min | max | p | \({\overline{S}}\) | \({\overline{Q}}\) |
---|---|---|---|---|---|---|---|
OLS | \(4.805\mathrm {e-}02\) | \(4.563\mathrm {e-}02\) | \(2.702\mathrm {e-}02\) | \(1.770\mathrm {e-}01\) | - | 0 | - |
RIDGE | \(3.067\mathrm {e-}02\) | \(3.771\mathrm {e-}03\) | \(2.513\mathrm {e-}02\) | \(3.884\mathrm {e-}02\) | \(<0.05\) | 0 | 0.8192 |
FS | \(2.796\mathrm {e-}02\) | \(4.611\mathrm {e-}03\) | \(1.931\mathrm {e-}02\) | \(3.247\mathrm {e-}02\) | \(<0.05\) | 0.4422 | 0.5699 |
LAR | \(3.101\mathrm {e-}02\) | \(6.465\mathrm {e-}03\) | \(1.919\mathrm {e-}02\) | \(4.063\mathrm {e-}02\) | 0.1056 | 0.4756 | 0.5849 |
LASSO | \(3.194\mathrm {e-}02\) | \(5.447\mathrm {e-}03\) | \(2.280\mathrm {e-}02\) | \(4.063\mathrm {e-}02\) | 0.2324 | 0.5400 | 0.5624 |
ENET | \(3.195\mathrm {e-}02\) | \(5.450\mathrm {e-}03\) | \(2.280\mathrm {e-}02\) | \(4.063\mathrm {e-}02\) | 0.2324 | 0.5400 | 0.5625 |
PSO-OLS | \(2.952\mathrm {e-}05\) | \(4.093\mathrm {e-}05\) | \(5.756\mathrm {e-}06\) | \(1.169\mathrm {e-}04\) | \(<0.05\) | 0 | 0.5003 |
PSO-RIDGE | \(2.952\mathrm {e-}05\) | \(4.093\mathrm {e-}05\) | \(5.756\mathrm {e-}06\) | \(1.169\mathrm {e-}04\) | \(<0.05\) | 0 | 0.5003 |
PSO-FS | \(3.539\mathrm {e-}03\) | \(2.766\mathrm {e-}03\) | \(8.732\mathrm {e-}05\) | \(9.228\mathrm {e-}03\) | \(<0.05\) | 0.7689 | 0.1524 |
PSO-LAR | \(3.700\mathrm {e-}03\) | \(2.780\mathrm {e-}03\) | \(6.908\mathrm {e-}04\) | \(8.417\mathrm {e-}03\) | \(<0.05\) | 0.7489 | 0.1641 |
PSO-LASSO | * | * | * | * | * | * | * |
PSO-ENET | \(1.864\mathrm {e-}03\) | \(1.237\mathrm {e-}03\) | \(5.370\mathrm {e-}04\) | \(4.643\mathrm {e-}03\) | \(<0.05\) | 0.7489 | \(\mathbf {0.1450}\) |
GA-OLS | \(3.101\mathrm {e-}05\) | \(3.588\mathrm {e-}05\) | \(7.599\mathrm {e-}06\) | \(1.222\mathrm {e-}04\) | \(<0.05\) | 0 | 0.5003 |
GA-RIDGE | \(3.101\mathrm {e-}05\) | \(3.588\mathrm {e-}05\) | \(7.599\mathrm {e-}06\) | \(1.222\mathrm {e-}04\) | \(<0.05\) | 0 | 0.5003 |
GA-FS | \(4.488\mathrm {e-}03\) | \(2.629\mathrm {e-}03\) | \(1.812\mathrm {e-}03\) | \(9.691\mathrm {e-}03\) | \(<0.05\) | 0.7644 | 0.1645 |
GA-LAR | \(2.954\mathrm {e-}03\) | \(2.205\mathrm {e-}03\) | \(7.822\mathrm {e-}04\) | \(8.141\mathrm {e-}03\) | \(<0.05\) | 0.7200 | 0.1707 |
GA-LASSO | * | * | * | * | * | * | * |
GA-ENET | \(2.742\mathrm {e-}03\) | \(1.458\mathrm {e-}03\) | \(5.041\mathrm {e-}04\) | \(5.452\mathrm {e-}03\) | \(<0.05\) | 0.7489 | 0.1541 |
SA-OLS | \(3.380\mathrm {e-}04\) | \(1.146\mathrm {e-}04\) | \(1.534\mathrm {e-}04\) | \(5.330\mathrm {e-}04\) | \(<0.05\) | 0 | 0.5035 |
SA-RIDGE | \(3.380\mathrm {e-}04\) | \(1.146\mathrm {e-}04\) | \(1.534\mathrm {e-}04\) | \(5.330\mathrm {e-}04\) | \(<0.05\) | 0 | 0.5035 |
SA-FS | \(6.317\mathrm {e-}03\) | \(3.436\mathrm {e-}03\) | \(1.230\mathrm {e-}03\) | \(1.180\mathrm {e-}02\) | \(<0.05\) | 0.7711 | 0.1802 |
SA-LAR | \(4.993\mathrm {e-}03\) | \(3.820\mathrm {e-}03\) | \(1.115\mathrm {e-}03\) | \(1.412\mathrm {e-}02\) | \(<0.05\) | 0.7089 | 0.1975 |
SA-LASSO | * | * | * | * | * | * | * |
SA-ENET | * | * | * | * | * | * | * |
7 Experimental results
7.1 Experimental setup
Rule | p | \(\sigma \) | q | \( \delta \) | \(w_2\) | \(w_1\) | \(v_2\) | \(v_1\) | b |
---|---|---|---|---|---|---|---|---|---|
OLS | |||||||||
\(R_{1}\) | \(-\)1 | 0.4247 | 0 | 0.2123 | 5.296 | 11.51 | \(-\)6.266 | 0.8213 | 6.488 |
\(R_{2}\) | \(-\)1 | 0.4247 | 0.5 | 0.2123 | 0.1319 | \(-\)2.838 | 12.70 | \(-\)10.61 | 0.5341 |
\(R_{3}\) | \(-\)1 | 0.4247 | 1 | 0.2123 | 24.57 | 56.02 | \(-\)22.59 | 43.99 | 11.98 |
\(R_{4}\) | 0 | 0.4247 | 0 | 0.2123 | \(-\)5.941 | 0.0421 | 9.625 | 1.064 | \(-\)0.5714 |
\(R_{5}\) | 0 | 0.4247 | 0.5 | 0.2123 | 3.944 | 0.3770 | \(-\)15.51 | 14.26 | \(-\)3.450 |
\(R_{6}\) | 0 | 0.4247 | 1 | 0.2123 | \(-\)32.44 | \(-\)0.2202 | 19.21 | \(-\)39.19 | 16.17 |
\(R_{7}\) | 1 | 0.4247 | 0 | 0.2123 | 5.328 | \(-\)11.48 | 10.58 | 3.775 | 6.891 |
\(R_{8}\) | 1 | 0.4247 | 0.5 | 0.2123 | \(-\)1.701 | 6.309 | \(-\)16.25 | 16.67 | \(-\)8.504 |
\(R_{9}\) | 1 | 0.4247 | 1 | 0.2123 | 25.98 | \(-\)58.55 | 5.682 | \(-\)15.92 | 45.31 |
PSO-ENET | |||||||||
\(R_{1}\) | \(-\)0.9053 | 2.106 | 0.2191 | 0.4545 | \(-\)1.991 | 0 | 0 | 0 | 0 |
\(R_{2}\) | \(-\)0.9053 | 2.106 | 0.5484 | 0.3625 | 5.924 | 0 | 0 | 0 | 0 |
\(R_{3}\) | \(-\)0.9053 | 2.106 | 0.9693 | 0.3919 | \(-\)2.272 | 0 | 0 | \(-\)0.0165 | 0 |
\(R_{4}\) | \(-\)0.1909 | 1.613 | 0.2191 | 0.4545 | 0 | 0 | 0 | 0 | 0 |
\(R_{5}\) | \(-\)0.1909 | 1.613 | 0.5484 | 0.3625 | 0 | 0 | 0 | 0 | 0 |
\(R_{6}\) | \(-\)0.1909 | 1.613 | 0.9693 | 0.3919 | 0 | 0 | 0 | 0 | 0 |
\(R_{7}\) | 0.0284 | 1.959 | 0.2191 | 0.4545 | \(-\)2.376 | 0.0035 | 0 | 0 | \(-\)0.0147 |
\(R_{8}\) | 0.0284 | 1.959 | 0.5484 | 0.3625 | 7.038 | \(-\)0.0164 | 0 | 0 | 0.0394 |
\(R_{9}\) | 0.0284 | 1.959 | 0.9693 | 0.3919 | \(-\)2.719 | 0.0057 | 0 | 0 | 0 |
7.2 Implementation
The function regress from the Matlab Statistics and Machine Learning Toolbox (MathWorks 2019b) has been used to apply the OLS regression. The ridge regression has been implemented in Matlab using a custom function. The sparse regressions have been implemented in the functions forwardselection, lar, lasso, and elasticnet. These functions take the regression matrix \({{\mathbf {X}}}\) and the vector \({{\mathbf {y}}}\) as arguments. Moreover, the function elasticnet has the regularization parameter \(\delta \). As the output, the described functions return the solution path in the form of the coefficients \({{\mathbf {w}}}\), from which the best solution can be selected. The antecedent parameters have been optimized using the functions particleswarm, ga, and simulannealbnd. These functions allow the solution to be obtained subject to the bounds defined by the user. They operate on the vector that contains the parameters of the Gaussian membership functions.
7.3 Results of experiment 1
Algorithm | \(\overline{\text {Time}}\,[s]\) | Algorithm | \(\overline{\text {Time}}\,[s]\) |
---|---|---|---|
Experiment 1 | |||
OLS | 0.0086 | GA-OLS | 41.53 |
RIDGE | 0.0090 | GA-RIDGE | 41.54 |
FS | 0.0610 | GA-FS | 41.93 |
LAR | 0.0732 | GA-LAR | 41.96 |
LASSO | 0.1697 | GA-LASSO | * |
ENET | 0.2422 | GA-ENET | 42.72 |
PSO-OLS | 39.15 | SA-OLS | 52.13 |
PSO-RIDGE | 39.15 | SA-RIDGE | 52.13 |
PSO-FS | 39.61 | SA-FS | 52.55 |
PSO-LAR | 39.61 | SA-LAR | 52.53 |
PSO-LASSO | * | SA-LASSO | * |
PSO-ENET | 40.28 | SA-ENET | * |
Experiment 2 | |||
OLS | 0.0095 | GA-OLS | 48.79 |
RIDGE | 0.0134 | GA-RIDGE | 48.79 |
FS | 0.0734 | GA-FS | 49.21 |
LAR | 0.0702 | GA-LAR | 49.25 |
LASSO | * | GA-LASSO | 50.57 |
ENET | 0.2589 | GA-ENET | 50.48 |
PSO-OLS | 47.99 | SA-OLS | 64.44 |
PSO-RIDGE | 47.99 | SA-RIDGE | 64.44 |
PSO-FS | 48.48 | SA-FS | 64.94 |
PSO-LAR | * | SA-LAR | 64.90 |
PSO-LASSO | * | SA-LASSO | 66.44 |
PSO-ENET | 49.48 | SA-ENET | 65.87 |
Algorithm | \(\overline{\mathrm {RMSE}}\) | std | min | max | p | \({\overline{S}}\) | \({\overline{Q}}\) |
---|---|---|---|---|---|---|---|
OLS | \(3.457\mathrm {e-}01\) | \(2.550\mathrm {e-}01\) | \(1.445\mathrm {e-}01\) | \(9.588\mathrm {e-}01\) | - | 0 | - |
RIDGE | \(2.606\mathrm {e-}01\) | \(1.485\mathrm {e-}01\) | \(1.263\mathrm {e-}01\) | \(5.187\mathrm {e-}01\) | \(<0.05\) | 0 | 0.8769 |
FS | \(7.437\mathrm {e-}02\) | \(1.386\mathrm {e-}02\) | \(5.487\mathrm {e-}02\) | \(9.488\mathrm {e-}02\) | \( <0.05 \) | 0.8756 | 0.1698 |
LAR | \(1.072\mathrm {e-}01\) | \(9.910\mathrm {e-}03\) | \(9.600\mathrm {e-}02\) | \(1.290\mathrm {e-}01\) | \( <0.05 \) | 0.9356 | 0.1873 |
LASSO | * | * | * | * | * | * | * |
ENET | \(1.082\mathrm {e-}01\) | \(5.401\mathrm {e-}03\) | \(9.801\mathrm {e-}02\) | \(1.162\mathrm {e-}01\) | \( <0.05 \) | 0.9622 | 0.1754 |
PSO-OLS | \(3.687\mathrm {e-}04\) | \(9.320\mathrm {e-}05\) | \(2.672\mathrm {e-}04\) | \(5.009\mathrm {e-}04\) | \( <0.05 \) | 0 | 0.5011 |
PSO-RIDGE | \(3.687\mathrm {e-}04\) | \(9.320\mathrm {e-}05\) | \(2.672\mathrm {e-}04\) | \(5.009\mathrm {e-}04\) | \( <0.05 \) | 0 | 0.5011 |
PSO-FS | \(3.403\mathrm {e-}02\) | \(1.920\mathrm {e-}02\) | \(1.267\mathrm {e-}02\) | \(7.570\mathrm {e-}02\) | \( <0.05 \) | 0.7956 | \(\mathbf {0.1515}\) |
PSO-LAR | * | * | * | * | * | * | * |
PSO-LASSO | * | * | * | * | * | * | * |
PSO-ENET | \(1.723\mathrm {e-}02\) | \(8.211\mathrm {e-}03\) | \(4.127\mathrm {e-}03\) | \(3.215\mathrm {e-}02\) | \( <0.05 \) | 0.7244 | 0.1627 |
GA-OLS | \(1.587\mathrm {e-}04\) | \(9.430\mathrm {e-}05\) | \(4.700\mathrm {e-}05\) | \(3.013\mathrm {e-}04\) | \( <0.05 \) | 0 | 0.5003 |
GA-RIDGE | \(1.587\mathrm {e-}04\) | \(9.430\mathrm {e-}05\) | \(4.700\mathrm {e-}05\) | \(3.013\mathrm {e-}04\) | \( <0.05 \) | 0 | 0.5003 |
GA-FS | \(3.726\mathrm {e-}02\) | \(7.710\mathrm {e-}03\) | \(2.865\mathrm {e-}02\) | \(5.049\mathrm {e-}02\) | \( <0.05 \) | 0.8022 | 0.1528 |
GA-LAR | \(2.943\mathrm {e-}02\) | \(1.524\mathrm {e-}02\) | \(5.868\mathrm {e-}03\) | \(4.783\mathrm {e-}02\) | \( <0.05 \) | 0.7356 | 0.1748 |
GA-LASSO | \(3.649\mathrm {e-}02\) | \(1.768\mathrm {e-}02\) | \(1.377\mathrm {e-}02\) | \(5.949\mathrm {e-}02\) | \( <0.05 \) | 0.7489 | 0.1783 |
GA-ENET | \(2.860\mathrm {e-}02\) | \(1.498\mathrm {e-}02\) | \(1.377\mathrm {e-}02\) | \(5.639\mathrm {e-}02\) | \( <0.05 \) | 0.7378 | 0.1725 |
SA-OLS | \(1.688\mathrm {e-}03\) | \(5.591\mathrm {e-}04\) | \(7.519\mathrm {e-}04\) | \(2.510\mathrm {e-}03\) | \( <0.05 \) | 0 | 0.5034 |
SA-RIDGE | \(1.688\mathrm {e-}03\) | \(5.591\mathrm {e-}04\) | \(7.519\mathrm {e-}04\) | \(2.510\mathrm {e-}03\) | \( <0.05 \) | 0 | 0.5034 |
SA-FS | \(3.121\mathrm {e-}02\) | \(1.720\mathrm {e-}02\) | \(6.798\mathrm {e-}03\) | \(4.917\mathrm {e-}02\) | \( <0.05 \) | 0.7733 | 0.1585 |
SA-LAR | \(3.149\mathrm {e-}02\) | \(1.334\mathrm {e-}02\) | \(1.281\mathrm {e-}02\) | \(5.217\mathrm {e-}02\) | \( <0.05 \) | 0.7489 | 0.1711 |
SA-LASSO | \(2.803\mathrm {e-}02\) | \(1.506\mathrm {e-}02\) | \(1.147\mathrm {e-}02\) | \(5.557\mathrm {e-}02\) | \( <0.05 \) | 0.7133 | 0.1839 |
SA-ENET | \(2.182\mathrm {e-}02\) | \(9.186\mathrm {e-}03\) | \(7.292\mathrm {e-}03\) | \(3.650\mathrm {e-}02\) | \( <0.05 \) | 0.7133 | 0.1749 |
7.4 Results of experiment 2
Rule | p | \(\sigma \) | q | \( \delta \) | \(w_2\) | \(w_1\) | \(v_2\) | \(v_1\) | b |
---|---|---|---|---|---|---|---|---|---|
OLS | |||||||||
\(R_{1}\) | \(-\)1 | 0.4247 | 0 | 0.2123 | 4.794 | 11.64 | 53.93 | 3.130 | 8.401 |
\(R_{2}\) | \(-\)1 | 0.4247 | 0.5 | 0.2123 | 2.591 | 1.369 | \(-\)57.28 | 57.33 | \(-\)17.15 |
\(R_{3}\) | \(-\)1 | 0.4247 | 1 | 0.2123 | 26.88 | 59.32 | 45.42 | \(-\)95.81 | 85.97 |
\(R_{4}\) | 0 | 0.4247 | 0 | 0.2123 | \(-\)7.663 | \(-\)0.0414 | 28.90 | 1.408 | \(-\)0.2997 |
\(R_{5}\) | 0 | 0.4247 | 0.5 | 0.2123 | 0.3362 | 2.946 | \(-\)31.22 | 30.64 | \(-\)8.697 |
\(R_{6}\) | 0 | 0.4247 | 1 | 0.2123 | \(-\)26.27 | 3.442 | 30.60 | \(-\)63.89 | 30.34 |
\(R_{7}\) | 1 | 0.4247 | 0 | 0.2123 | 3.393 | \(-\)9.364 | \(-\)415.1 | \(-\)6.422 | 0.6953 |
\(R_{8}\) | 1 | 0.4247 | 0.5 | 0.2123 | \(-\)0.5307 | \(-\)1.685 | 288.6 | \(-\)306.4 | 94.64 |
\(R_{9}\) | 1 | 0.4247 | 1 | 0.2123 | 15.21 | \(-\)35.50 | \(-\)232.4 | 484.2 | \(-\)234.7 |
PSO-FS | |||||||||
\(R_{1}\) | \(-\)0.7318 | 0.4725 | 0.6572 | 1.025 | 0 | 0 | 0 | 0 | 0 |
\(R_{2}\) | \(-\)0.7318 | 0.4725 | 0.6618 | 1.034 | 0 | \(-\)7.586 | 0 | 0 | 0 |
\(R_{3}\) | \(-\)0.7318 | 0.4725 | 0.5186 | 0.3502 | 0 | 0 | 0 | 0 | \(-\)28.06 |
\(R_{4}\) | \(-\)0.9964 | 0.9916 | 0.6572 | 1.025 | 0 | \(-\)4.925 | 0 | 0 | 0 |
\(R_{5}\) | \(-\)0.9964 | 0.9916 | 0.6618 | 1.034 | \(-\)11.84 | 0 | 0 | 0 | 0.4208 |
\(R_{6}\) | \(-\)0.9964 | 0.9916 | 0.5186 | 0.3502 | 0 | \(-\)10.55 | 0 | 0 | 13.37 |
\(R_{7}\) | 1 | 0.3468 | 0.6572 | 1.025 | 0 | 0 | 0 | 0 | 0 |
\(R_{8}\) | 1 | 0.3468 | 0.6618 | 1.034 | 2.035 | 0 | 0 | 0 | 0 |
\(R_{9}\) | 1 | 0.3468 | 0.5186 | 0.3502 | \(-\)7.655 | 0 | 0 | 0 | 7.450 |