Published in: Neural Computing and Applications 5/2010

Open Access 01.07.2010 | Original Article

Research on an online self-organizing radial basis function neural network

Authors: Honggui Han, Qili Chen, Junfei Qiao

Abstract

A new growing and pruning algorithm, named the self-organizing RBF (SORBF), is proposed in this paper for designing the structure of radial basis function (RBF) neural networks. The structure of the RBF neural network is introduced first, and the growing and pruning algorithm is then used to design the network structure automatically. The growing and pruning approach is based on the radius of the receptive field of the RBF nodes. In addition, parameter-adjusting algorithms are proposed for the whole RBF neural network. The performance of the proposed method is evaluated on function approximation and dynamic system identification tasks. The method is then applied to estimating the biochemical oxygen demand (BOD) concentration in a wastewater treatment system. Experimental results show that the proposed method is efficient for network structure optimization and achieves better performance than some existing algorithms.

1 Introduction

Radial basis function (RBF) neural networks offer an efficient mechanism for approximating complex nonlinear functions [1], pattern recognition [2], and modeling and controlling dynamic systems [3, 4] from input–output data. In practice, the suitability of an RBF neural network for a particular application depends on its structure and learning abilities. As research objects become more and more complex, conventional RBF neural networks with fixed structures can no longer satisfy the requirements. The most difficult bottlenecks are choosing the initial number of hidden nodes and the initial positions and widths of the RBF nodes.
To solve these problems, researchers have proposed several kinds of methods. A significant contribution to adding nodes to the neural network was made by Platt [5] through the development of the resource-allocating network (RAN), in which hidden nodes are added sequentially based on the novelty of the new data. Karayiannis et al. [6] propose a growing RBF neural network that merges supervised and unsupervised learning. The structure of the RBF neural network can be changed by the research objects, but this method does not consider redundant nodes in the hidden layer. On the other hand, Yingwei et al. [7] introduce a pruning strategy, based on the relative contribution of each hidden neuron to the overall network output, to reduce the complexity of the RBF neural network; in theory, the resulting neural network is minimal. Other pruning methods for RBF neural networks have been proposed by Salmerón et al. [8] and Rojas et al. [9]. Unfortunately, one disadvantage of pruning algorithms is their heavy computational cost, since the majority of the training time is spent on networks larger than necessary. A promising alternative to growing or pruning alone is to combine them. Wu et al. [10] propose a hierarchical online self-organizing learning algorithm for dynamic RBF neural networks. However, all the widths of the Gaussian membership functions are the same, which usually does not match reality, especially when the input variables have significantly different operating intervals. Huang et al. [11] propose a simple sequential learning algorithm for RBF neural networks, referred to as the growing and pruning algorithm for RBF (GAP-RBF), and later extend it to GGAP-RBF [12]. Both GAP-RBF and GGAP-RBF neural networks decide whether to add new nodes or remove redundant nodes based on the "significance" of a neuron, which is linked to the learning accuracy. However, since these algorithms are online procedures, they do not optimize the network over all past training data. Moreover, the network initialization requires prior knowledge about the data, which may not be available.
Regarding the positions and widths of the RBF nodes, other papers adjust these parameters during the learning process. Mulero-Martinez [13] analyzes the boundedness of the coefficients involved in Gaussian expansion series. These series arise from the reconstruction of band-limited functions, applying the sampling theorem with Gaussians as the reconstruction filters; the width can be adapted to the approximation errors and consequently to the accuracy of the estimation. Chen et al. [14] introduce a powerful symmetric RBF neural network classifier for nonlinear detection. By exploiting the inherent symmetry property of the optimal Bayesian detector, the proposed symmetric RBF neural network is capable of approaching the optimal classification performance using noisy training data; the construction process is robust to the choice of the RBF width and is computationally efficient. Wang and Yu [15] present an adaptive RBF neural network method to estimate two unmeasurable physical parameters online and to compensate for model uncertainty and time-varying engine dynamics. The adaptive law of the neural network is based on Lyapunov theory, so the RBF neural network can adjust the widths of the RBFs while the stability of the network is guaranteed. In recent years, wavelet neural networks [16, 17] have been considered better and simpler alternatives for adjusting the positions and widths of the RBF nodes. In wavelet RBF neural networks (WRBF), the activation function of the hidden layer nodes is replaced with a type of wavelet function; the position and dilation of the wavelets are fixed, and only the weights are optimized. Although other papers [18–22] also address the positions and widths of the RBF nodes, few of them simultaneously consider the optimal number of hidden layer nodes.
In this paper, a new online SORBF algorithm is proposed for RBF structure design that can add new RBF nodes and remove redundant RBF nodes in the hidden layer. In the structure design phase, the growing and pruning method decides whether the architecture of the neural network is satisfactory, based on the radius of the receptive field of the RBF nodes and the required stable error. At the same time, the positions and widths of the RBF nodes in the hidden layer are adjusted by a gradient-descent algorithm. To highlight the effectiveness of the SORBF, its performance is compared with other well-known RBF neural networks on benchmark problems in nonlinear function approximation and nonlinear dynamic system identification. The SORBF is then used to estimate key variables in a wastewater treatment system. The results indicate that the SORBF provides comparable generalization performance with less computational complexity.
This paper is organized as follows. Section 2 describes the details of the RBF neural network. Section 3 presents the RBF node selection algorithm based on the self-organizing method; the algorithms for adjusting the positions and widths of the RBF nodes are also given in this section. Experimental results of the simulations are presented in Sect. 4; to demonstrate the superior performance of the proposed SORBF algorithm, the results are compared with other self-organizing RBF algorithms. Section 5 summarizes the conclusions of this study.

2 Radial basis function (RBF) neural network

The standard radial basis function (RBF) neural network consists of three layers: an input layer, a hidden layer, and an output layer. Figure 1 shows a schematic representation of the RBF network. The number of nodes in the input and output layers is determined by the research objects. The nodes in the input layer and the output layer represent the vector from an input space and the desired network response, respectively. Through a defined learning algorithm, the error between the actual and desired responses is minimized according to an optimization criterion.
As depicted in Fig. 1, the ith output node of the RBF network can be expressed as follows:
$$ y_{i} = \sum_{k = 1}^{N} w_{ik}\,\varphi_{k}\left( \left\| x - v_{k} \right\| \right), \quad i = 1, 2, \ldots, m $$
(1)
where x = [x_1, x_2, …, x_n]^T is an input vector; n is the number of input nodes; v_k is the center of the kth RBF node in the hidden layer, k = 1, 2, …, N, where N is the number of hidden nodes; ||x − v_k|| denotes the Euclidean distance between v_k and x; \( \varphi_k(\cdot) \) is the nonlinear transfer function of the kth RBF node; w_ik is the weight between the kth RBF node and the ith output node; and m is the number of output nodes.
Equation (1) reveals that the output of the network is computed as a weighted sum of the hidden layer outputs. The nonlinear outputs of the hidden layer are given by \( \varphi_k(\cdot) \), which are radially symmetric. In this paper, the function chosen for the hidden nodes is the Gaussian function:
$$ \varphi(x) = \exp\left( -\frac{\left\| x - v \right\|^{2}}{\delta^{2}} \right) $$
(2)
where v and δ are the position (center) and width parameters of the RBF node. Figure 2 shows RBF nodes with different widths and positions. The activation function most commonly used for classification and regression problems is the Gaussian function, because it is continuous and differentiable, provides a softer output, and improves the interpolation capabilities. The procedure to design an RBF neural network for a function approximation problem is shown below:
1. Initialize the number N of RBF nodes;
2. Initialize the positions v of the RBF nodes;
3. Initialize the widths δ of the RBF nodes;
4. Calculate the optimum values for the weights w;
5. Apply a local search algorithm to adjust the widths and positions of the RBF nodes.
Regarding steps 1–3, the initialization of the number of RBF nodes is very important: if an unsuitable number of RBF nodes is chosen, the training time will be long and the target approximation error will be hard to achieve. The reason is that a local search algorithm, while fine-tuning the positions and widths of the RBF nodes, can hardly offset the flaws of the architecture. A self-organizing algorithm is therefore necessary for RBF structure design. Such an algorithm can add new RBF nodes and remove redundant ones, which solves the architecture problem. Meanwhile, the positions and widths of the RBF nodes are also very important: if they are not appropriate, the network may fall into a bad local minimum. Once the RBF neural network has selected the correct number of RBF nodes, the weights and the positions and widths of the RBF nodes are adjusted at the same time.
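To make Eqs. (1) and (2) concrete, the following sketch computes the forward pass of such a network. It is an illustrative Python/NumPy implementation, not code from the paper; the function and array names are our own.

```python
# A minimal sketch of the RBF forward pass of Eqs. (1)-(2), assuming
# Gaussian activations; names (rbf_forward, centers, widths, weights)
# are illustrative, not from the paper.
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Compute y_i = sum_k w_ik * phi_k(||x - v_k||).

    x       : (n,)   input vector
    centers : (N, n) RBF node positions v_k
    widths  : (N,)   RBF node widths delta_k
    weights : (N, m) output weights w_ik
    """
    dist_sq = np.sum((centers - x) ** 2, axis=1)  # ||x - v_k||^2
    phi = np.exp(-dist_sq / widths ** 2)          # Gaussian activations, Eq. (2)
    return phi @ weights                          # weighted sum, Eq. (1)

# Usage with random parameters (N = 5 hidden nodes, n = 3 inputs, m = 1 output)
rng = np.random.default_rng(0)
N, n, m = 5, 3, 1
y = rbf_forward(rng.random(n), rng.random((N, n)), np.full(N, 0.5),
                rng.random((N, m)))
print(y)
```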

3 Self-organizing RBF neural network

The key problems of the RBF neural network are the structure design and the parameter-learning approaches. The structure design mainly concerns the RBF nodes in the hidden layer. The SORBF algorithm can construct the RBF neural network automatically. The procedure to design the SORBF neural network for an application problem is shown below:
1. Initialize the number N of RBF nodes;
2. Initialize the positions v and widths δ of the RBF nodes;
3. Initialize the radius r of the receptive field for each RBF node;
4. Calculate the optimum number of RBF nodes;
5. Apply a local search algorithm to adjust the positions v, the widths δ, and the radii r;
6. Calculate the optimum values for the weights w.
The SORBF neural network can add or remove RBF nodes. It is a modification of the neural network structure design method aimed at finding a suitable RBF neural network architecture.

3.1 RBF nodes selection method

The main steps of the RBF nodes selection method are shown as follows:
1. Initialization: k = 1. The weights connecting the hidden layer to the output layer, W = (w_ij)_{N×m} with 0 ≤ w_ij ≤ 1, are set randomly; the number of nodes in the hidden layer is pre-given as N. The radius r of the receptive field of each RBF node is initialized as r_0, and the expected error is E_d. The positions v of the RBF nodes are set randomly, and the widths of the RBF nodes are initialized as δ_0, i = 1, 2, …, N.
 
2. Find the winner node x in the hidden layer; the winner node at time k is determined by:
$$ \left\| v_{x} - P(k) \right\| = \min_{i = 1, 2, \ldots, N} \left\| v_{i} - P(k) \right\| $$
(3)
where ||·|| is the Euclidean distance and P(k) is the input vector at time k.
 
3. Add a new RBF node or remove a redundant RBF node:
If E(k) > αE_d and ||v_x − P(k)|| > βr_x, go to step 4;
If E(k) ≤ αE_d and ||v_x − P(k)|| > βr_x, go to step 6;
If E(k) ≤ αE_d and ||v_x − P(k)|| ≤ βr_x, go to step 5;
If E(k) > αE_d and ||v_x − P(k)|| ≤ βr_x, go to step 6;
where 1 < α < 2 and 1 < β < 2.
 
4. Adding a new RBF node.
4.1 Find the RBF node v_x in the hidden layer nearest to P(k) using formula (3).
4.2 Insert a new RBF node r between node x and P(k); the position and width of the new RBF node are:
$$ v_{r} = aP(k) + (1 - a)v_{x}, \qquad \delta_{r} = \delta_{0} $$
(4)
where a is a positive constant less than 1.
4.3 Initialize the radius r_r of the receptive field of the new RBF node r:
$$ r_{r} = r_{0}. $$
(5)
5. Deleting a redundant RBF node.
5.1 Compute the distance d_x between node v_x and its nearest node f. If d_x < r_x and d_x < r_f, go to the next substep; otherwise, go to step 6.
5.2 Delete node x or node f. Because the radii of these nodes are not the same, the node retained depends on the radius value: if r_f < r_x, delete node v_f; otherwise, delete node v_x.
5.3 Modify the radius r_x of the receptive field of the retained node (assuming the retained node is x):
$$ r_{x} = br_{x} + (1 - b)r_{f} $$
(6)
where b is a positive constant between 0.5 and 1.
6. Update the positions and widths of the RBF nodes. Stop when the training time limit is reached or the expected error E_d is achieved.
 
The RBF node selection method essentially seeks the suitable RBF nodes in the hidden layer, where each input accesses the respective nodes, containing the adjustable weight parameters, to compute the activity of the RBF nodes. The self-organizing algorithm searches for the RBF node responding to each input vector by choosing the minimum Euclidean distance. For each input, the SORBF finds a suitable RBF node or changes the active nodes accordingly. The criterion for RBF node selection is based on the radius of the receptive field.
According to the preceding explanation of RBF node selection, the structure of the initial RBF network is modified along with the learning process, and the final architecture is suitable for the current objects. Following these steps, the SORBF adjusts the other parameters of the RBF neural network.
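The following sketch restates the grow/prune logic of steps 3–5 in code form. It is our illustrative reading of the method, with hypothetical function names; only the decision conditions and Eqs. (4)–(6) come from the paper.

```python
# A sketch of the four-way grow/prune decision of step 3 and the node
# operations of steps 4-5; the bookkeeping around it is assumed.

def select_action(E_k, E_d, dist_winner, r_winner, alpha=1.5, beta=1.5):
    """Return 'grow', 'prune', or 'update' (1 < alpha, beta < 2)."""
    if E_k > alpha * E_d and dist_winner > beta * r_winner:
        return "grow"       # step 4: insert a new node
    if E_k <= alpha * E_d and dist_winner <= beta * r_winner:
        return "prune"      # step 5: check for a redundant node
    return "update"         # step 6: just adjust parameters

def grow_node(P_k, v_x, delta0, r0, a=0.8):
    """New node between winner v_x and input P(k), Eqs. (4)-(5)."""
    v_new = a * P_k + (1 - a) * v_x   # works for scalars or NumPy arrays
    return v_new, delta0, r0

def merge_radius(r_x, r_f, b=0.9):
    """Radius of the retained node after deleting its neighbour, Eq. (6)."""
    return b * r_x + (1 - b) * r_f

# Illustrative numbers: large error, input far from the winner -> 'grow'
print(select_action(E_k=0.5, E_d=0.2, dist_winner=1.2, r_winner=0.4))
```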

3.2 Adjusting radius of the receptive field for the RBF nodes

In this paper, the radius is used to judge whether the structure of the neural network should be reset. The radius of the receptive field affects the capabilities of the final neural network and is one of the essential factors determining its results.
The radii of the receptive fields are adjusted during the learning process based on how many times each node wins. The principles are described as follows (taking node x as an example):
$$ r_{x} = \begin{cases} \varepsilon_{x} r_{x} & \text{if } x \text{ is the winner} \\ \tau_{x} r_{x} & \text{if } x \text{ is not the winner} \end{cases} $$
(7)
where ε_x and τ_x are the radius-modifying parameters of the node; ε_x is greater than 1 and τ_x is less than 1. The values of these two parameters are computed as:
$$ \varepsilon_{x} = 1 + \frac{1}{\sqrt{2M_{x}}}, \qquad \tau_{x} = 1 - \frac{1}{10\sqrt{M_{x}}} $$
(8)
where M_x is the number of times the RBF node v_x has won. To simplify the computation, ε_x can be fixed at 1.01 and τ_x at 0.99, but such constant values may cause problems. For example, in complex systems the network structure changes sharply; if these two parameters are held constant, the structure changes slowly and the optimal network can hardly be obtained.
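A minimal sketch of this radius update, assuming we maintain the per-node winner counter M_x ourselves (M_x ≥ 1):

```python
# Radius adaptation of Eqs. (7)-(8); M_x is the winner count of node x.
import numpy as np

def update_radius(r_x, is_winner, M_x):
    if is_winner:
        eps_x = 1.0 + 1.0 / np.sqrt(2.0 * M_x)   # expansion factor > 1
        return eps_x * r_x
    tau_x = 1.0 - 1.0 / (10.0 * np.sqrt(M_x))    # shrink factor < 1
    return tau_x * r_x

print(update_radius(0.1, is_winner=True, M_x=4))   # radius grows slightly
```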

3.3 Adjusting weights, positions, and widths

The RBF network has three types of parameters: the positions v and the widths δ of the RBF nodes, and the weights w. These three parameters directly determine the final capabilities of the RBF neural network.
To train the neural network, the parameter-adjusting algorithm is based on the mean squared error (MSE), defined as:
$$ E(t) = \frac{1}{N_{s}} \sum_{t = 1}^{N_{s}} \left( y_{d}(t) - y(t) \right)^{2} $$
(9)
where N_s is the total number of samples, y_d(t) is the expected output at step t, and y(t) is the neural network output at step t. The goal of the method is to reach E(t) < E_d through learning, where E_d is the expected stable error. The details of the adjusting process are as follows:
1. The weights w:
$$ w_{ij}(t + 1) = w_{ij}(t) - \eta_{1} \frac{\partial E(t)}{\partial w_{ij}(t)}, \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, m $$
(10)
where η_1 is a positive constant less than 1.
2. The widths δ of the RBF nodes:
$$ \delta_{i}(t + 1) = \delta_{i}(t) - \eta_{2} \frac{\partial E(t)}{\partial \delta_{i}(t)}, \quad i = 1, 2, \ldots, N $$
(11)
where η_2 is a positive constant less than 1.
3. The positions v of the RBF nodes.
The positions v of newly inserted nodes were discussed in Sect. 3.1; the positions v of the other RBF nodes are updated as follows:
$$ v_{i}(t + 1) = \begin{cases} v_{i}(t) - \eta_{3} \left( P(k) - v_{i}(t) \right) & \text{if } v_{i} \text{ is the winner} \\ v_{i}(t) - \eta_{4} \frac{\partial E(t)}{\partial v_{i}(t)} & \text{otherwise} \end{cases} \quad i = 1, 2, \ldots, N $$
(12)
where η_3 and η_4 are positive constants less than 1.
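The following sketch shows one online update implementing Eqs. (10)–(12) for a single sample and a single output node. The analytic gradients of the Gaussian in Eq. (2) are derived by us, since the paper states only the update rules; the function name and learning-rate values are illustrative, and the winner update keeps the sign written in Eq. (12).

```python
# One gradient step for Eqs. (10)-(12), single sample, single output (m = 1).
# Arrays are float NumPy arrays and are updated in place.
import numpy as np

def gradient_step(x, y_d, centers, widths, weights, winner_idx,
                  eta1=0.1, eta2=0.05, eta3=0.05, eta4=0.05):
    diff = x - centers                        # (N, n): x - v_k
    dist_sq = np.sum(diff ** 2, axis=1)       # ||x - v_k||^2
    phi = np.exp(-dist_sq / widths ** 2)      # hidden outputs, Eq. (2)
    e = y_d - phi @ weights                   # single-output error

    # Eq. (10): dE/dw_k = -2 e phi_k for the single-sample squared error
    grad_w = -2.0 * e * phi
    # Eq. (11): dE/ddelta_k, using dphi_k/ddelta_k = 2 phi_k dist_sq / delta^3
    grad_d = -2.0 * e * weights * phi * (2.0 * dist_sq / widths ** 3)
    # Eq. (12), non-winner branch: dE/dv_k via dphi_k/dv_k = 2 phi_k diff / delta^2
    grad_v = -2.0 * e * (weights * phi * 2.0 / widths ** 2)[:, None] * diff

    weights -= eta1 * grad_w
    widths -= eta2 * grad_d
    centers -= eta4 * grad_v
    # Eq. (12), winner branch (sign as stated in the paper)
    centers[winner_idx] -= eta3 * (x - centers[winner_idx])
    return e
```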

3.4 Computational complexity and memory requirements

In the SORBF algorithm, the structure has to be updated after the pruning or addition of a neuron, which requires updating the RBF nodes using (5). The time required is therefore of the order O(N_s TM²), and the memory required is of the order O(N_s M), where N_s is the number of training data, T is the number of times the structure needs to be updated by adding or pruning a neuron, and M = N × (n + 1), in which n is the number of input variables and N is the number of RBF nodes. To show the good performance of the SORBF, its computational complexity and memory requirements are compared with other dynamic adaptation methods such as DFNN [10] and GAP-RBF [11]. DFNN uses the linear least squares (LLS) algorithm in batch mode; the time required by the LLS-based approach is of the order O(N_s² M²), and the memory required is of the order O(N_s² M). Usually, T ≪ N_s. GAP-RBF needs only a one-step (multiplication and division) operation for the pruning check and a simple sparse-matrix operation for parameter adjustment at each step using the extended Kalman filter (EKF), since only the nearest neuron is checked for significance; the time required is therefore of the order O(N_s TM²), and the memory required is of the order O(N_s M). Thus, the computational complexity and memory requirements of GAP-RBF are the same as those of SORBF, and the time required by DFNN is greater than that of SORBF.
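As a toy illustration of this order-of-magnitude bookkeeping, assuming example values of our own choosing, only the expressions O(N_s TM²), O(N_s M), and M = N(n + 1) come from the text:

```python
# Toy order-of-magnitude estimates for SORBF cost; constants are ours.
def sorbf_orders(Ns, T, N, n):
    M = N * (n + 1)                       # parameters per structure update
    return {"time": Ns * T * M ** 2,      # O(Ns T M^2) operations
            "memory": Ns * M}             # O(Ns M) storage

print(sorbf_orders(Ns=200, T=10, N=12, n=1))
```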

4 Simulations

To demonstrate the effectiveness of the proposed algorithm, three examples are analyzed in this paper: function approximation, nonlinear system identification, and nonlinear dynamic system modeling. To show the performance of the SORBF algorithm, the approximation and identification results are compared with those of other algorithms such as DFNN [10] and GAP-RBF [11].

4.1 Function approximation

A function that has been widely used to demonstrate the effectiveness of such algorithms [11] is used in this paper:
$$ y = 0.8\, e^{-0.2x} \sin(10x). $$
(13)
For each trial, the size of the training set is 200 and the size of the testing set is 200; the values of x are randomly distributed in the interval [0, 2].
The actual output at step k is y(k) and the desired value at step k is y_d(k), so the error at step k is e(k) = y_d(k) − y(k). The goal training MSE is 0.01, the initial radius of every hidden node is 0.1, the initial weight of every hidden node is randomly chosen in the interval [0, 1], and the initial number of RBF nodes in the SORBF is 2. The parameters a and b are set to 0.8 and 0.9 in this paper. The simulation results are shown below.
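A short sketch of how the training and testing samples for this benchmark could be generated; the sample sizes and interval follow the text, while the seed is arbitrary:

```python
# Data generation for the benchmark of Eq. (13).
import numpy as np

def target(x):
    return 0.8 * np.exp(-0.2 * x) * np.sin(10.0 * x)

rng = np.random.default_rng(42)
x_train = rng.uniform(0.0, 2.0, size=200)   # 200 training inputs in [0, 2]
x_test = rng.uniform(0.0, 2.0, size=200)    # 200 testing inputs in [0, 2]
y_train, y_test = target(x_train), target(x_test)
```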
Figure 3 shows the approximation results of the SORBF, demonstrating that the SORBF can approximate this function accurately. Figure 4 shows the error values during the approximation process within 1,000 steps, and Fig. 5 shows the dynamic number of nodes during training; these two figures show that when nodes are added to or eliminated from the hidden layer, the error value fluctuates, and the error then reaches the goal quickly. Figure 6 shows the RBF nodes with different widths and positions after training, and Fig. 7 shows the weight values of the RBF nodes after training. Several averaged performance measures over 20 independent runs are reported for this example: the CPU running time of the training process, the test MSE, and the number of preserved RBF nodes in the hidden layer. The detailed results, compared with other algorithms, are given in Table 1. Based on these results, the proposed SORBF performs better than DFNN and GAP-RBF, demonstrating good performance for this function approximation. It should be noted that the SORBF algorithm used for this function approximation is insensitive to the initial structure of the neural network. The final structure of the SORBF is simpler, and less memory is required because of the simple structure.
Table 1 The performance comparison with different algorithms

Algorithm | CPU time (s) | Training error | Testing error | No. of nodes
DFNN      | 62.32        | 0.01           | 0.0861        | 25
GAP-RBF   | 26.86        | 0.01           | 0.0415        | 19
SORBF     | 22.67        | 0.01           | 0.0248        | 12

4.2 Nonlinear dynamic system identification

The plant, which was used in [10] and other papers to demonstrate the effectiveness of such algorithms, is given as follows:
$$ y(t + 1) = \frac{y(t)\,y(t - 1)\left[ y(t) + 2.5 \right]}{1 + y^{2}(t) + y^{2}(t - 1)} + u(t). $$
(14)
$$ t \in [1, 100], \quad y(0) = 0, \quad y(1) = 0, \quad u(t) = \sin\left( \frac{2\pi t}{25} \right). $$
A set of 100 input–target data pairs is chosen as training data. Two different initializations are considered: in Case B.1, the initial number of RBF nodes is 2; in Case B.2, the initial number of RBF nodes is 30. The training MSE goal for this example is 0.01. The other initializations of SORBF, DFNN, and GAP-RBF are the same as in example A. The model is identified in series–parallel mode, as given below:
$$ \hat{y}(t + 1) = f\left( y(t), y(t - 1), u(t) \right). $$
(15)
The networks have three inputs (y(t), y(t − 1), u(t)) and one output y(t + 1). Another 100 input–target data pairs in the interval [301, 400] are chosen as testing data. The results for Case B.1 are shown in Figs. 8, 9, 10, and 11. Several averaged performance measures over 20 independent runs are reported for the two cases: the CPU running time of the training process, the test MSE, and the number of preserved RBF nodes in the hidden layer.
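A sketch of how the training trajectory of plant (14) and the series–parallel regressors of Eq. (15) could be generated; the array layout and variable names are our own choice:

```python
# Simulate the plant of Eq. (14) and stack the regressors of Eq. (15).
import numpy as np

T = 100
u = np.sin(2.0 * np.pi * np.arange(1, T + 1) / 25.0)   # u(t), t = 1..100
y = np.zeros(T + 2)                                     # y(0) = y(1) = 0
for t in range(1, T + 1):
    y[t + 1] = (y[t] * y[t - 1] * (y[t] + 2.5)
                / (1.0 + y[t] ** 2 + y[t - 1] ** 2)) + u[t - 1]

# Network inputs (y(t), y(t-1), u(t)) and target y(t+1)
X = np.column_stack([y[1:T + 1], y[0:T], u])
d = y[2:T + 2]
```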
As shown in Fig. 10, the SORBF organizes its structure with seven nodes after training (Case B.1). This is an important issue for practical applications: the structure does not change after about 500 steps, while the widths and the other parameters are adjusted continuously until the required error is achieved. Note that during learning, nodes may be added when the error is large or the convergence speed is slow; this behavior matches the self-organizing strategy. Figure 9 shows the error value during learning within 1,000 steps (Case B.1), illustrating the learning performance of the SORBF. The detailed results, compared with other algorithms, are given in Table 2. The results show that the SORBF performs well in this example.
Table 2 The performance comparison of different algorithms

           | Case B.1: with 2 initial hidden nodes            | Case B.2: with 30 initial hidden nodes
Approaches | CPU time (min) | Testing MSE | No. of RBF nodes  | CPU time (min) | Testing MSE | No. of RBF nodes
DFNN       | 2.554          | 0.0264      | 15                | 3.864          | 0.0164      | 16
GAP-RBF    | 0.823          | 0.0212      | 12                | 1.223          | 0.0112      | 13
SORBF      | 0.521          | 0.0112      | 7                 | 0.987          | 0.0112      | 8

4.3 Soft measurement for BOD

Rapid, accurate, and reliable measurement of BOD is highly desirable for monitoring and controlling wastewater treatment. Unfortunately, obtaining satisfactory BOD measurements with hardware instrumentation has proved difficult. This paper addresses BOD estimation using a model-based approach that relies on the SORBF. The model estimates the BOD of the settled sewage using suspended solids (SS), chemical oxygen demand (COD), and pH data. The model is straightforward to apply online and offers a method of estimating BOD in real time that is likely to be cheaper, more reliable, and easier to maintain than hardware instrumentation. The structure of the soft measurement technique based on the SORBF is given in Fig. 12.
The SS, COD, and pH data are used as inputs to the model to estimate the settled sewage BOD. The initial structure of the neural network is 3-2-1. The samples are from a wastewater treatment plant in Beijing in 2006: one hundred groups are used for training and another 100 groups for validation. The accuracy of the model estimate is quantified using the absolute error Ee, defined as follows:
$$ Ee(t) = \left| {y_{d} (t) - y(t)} \right| $$
(16)
where y_d(t) is the tth sample of settled sewage BOD data and y(t) is the model estimate of the tth sample. The results are shown in Figs. 13, 14, 15, 16, 17, and 18.
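A minimal sketch of this accuracy measure; the BOD arrays below are hypothetical placeholders, not data from the plant:

```python
# Absolute error of Eq. (16) on hypothetical BOD samples (mg/L).
import numpy as np

bod_true = np.array([12.3, 11.8, 13.1])   # placeholder measured BOD
bod_pred = np.array([12.0, 12.1, 12.9])   # placeholder model estimates
Ee = np.abs(bod_true - bod_pred)          # per-sample absolute error
print(Ee.max() < 1.0)                     # the paper reports errors < 1 mg/L
```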
Figure 13 shows the BOD results after training, and Fig. 14 shows the absolute error of the modeling results after training; the absolute errors are less than 1 mg/L, which is highly accurate. Based on these results, the SORBF is shown to be suitable for online BOD soft measurement. Figure 15 shows the dynamic number of RBF nodes within 1,000 steps, and Fig. 16 shows the MSE of the training process; these results also illustrate the good performance of the SORBF. Figure 17 shows the BOD prediction results, and Fig. 18 shows the absolute error of the prediction process. The results demonstrate that the BOD trends in the settled sewage can be predicted with acceptable accuracy using only SS, COD, and pH data as model inputs. The SORBF-based approach is relatively straightforward to implement online and can offer real-time predictions of BOD, whereas hardware instruments typically measure only short-term BOD. This is a significant feature, since BOD is the more commonly used and readily understood measure. Finally, this type of SORBF-based approach has the potential to be used for estimating a range of variables that are typically troublesome to measure with hardware, suggesting that the new algorithm can be a relatively cost-effective approach for measurement and other useful applications.

5 Conclusions and discussion

A new self-organizing RBF (SORBF) algorithm is proposed in this paper to design as well as train RBF neural networks. Neither the number of nodes in the hidden layer nor the parameters need to be predefined and fixed; they are adjusted automatically during the learning process. Two examples are used to demonstrate the effectiveness of this algorithm in comparison with DFNN and GAP-RBF, and the results show that the SORBF performs better than the other algorithms. Finally, the SORBF is used to measure and predict the BOD value online. This type of SORBF-based approach offers a promising, inexpensive route to real-time measurement of variables that have typically proved difficult to measure reliably with hardware.

Acknowledgments

The authors would like to thank Prof. Guo-Qiang Bi and Prof. Huan-you Zhang for reading the manuscript and providing valuable comments, and to express their deepest gratitude to the anonymous reviewers and the editor-in-chief for their valuable comments. This work was supported by the National 863 Scheme Foundation of China under Grant 2007AA04Z160, the National Science Foundation of China under Grants 60674066 and 60873043, the Ph.D. Program Foundation of the Ministry of Education of China under Grant 200800050004, and the Beijing Municipal Natural Science Foundation under Grant 4092010.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
References
1. Nam MD, Thanh TC (2003) Approximation of function and its derivatives using radial basis function networks. Appl Math Model 27(3):197–220
2. Sing JK, Basu DK, Nasipuri M, Kundu M (2007) Face recognition using point symmetry distance-based RBF network. Appl Soft Comput 7(1):58–70
3. Zhao T (2008) RBFN-based decentralized adaptive control of a class of large-scale non-affine nonlinear systems. Neural Comput Appl 17(4):357–364
4. Ram D, Srivastava L, Pandit M, Sharma J (2007) Corrective action planning using RBF neural network. Appl Soft Comput 7(3):1055–1063
5. Platt J (1991) A resource-allocating network for function interpolation. Neural Comput 3(2):213–225
6. Karayiannis NB, Mi GW (1997) Growing radial basis neural networks: merging supervised and unsupervised learning with network growth techniques. IEEE Trans Neural Netw 8(6):1492–1506
7. Yingwei L, Sundararajan N, Saratchandran P (1997) A sequential learning scheme for function approximation using minimal radial basis function (RBF) neural networks. Neural Comput 9(2):461–478
8. Salmerón M, Ortega J, Puntonet CG, Prieto A, Rojas I (2002) SSA, SVD, QR-cp, and RBF model reduction. In: Lecture notes in computer science, vol 2415. Springer, Germany, pp 589–594
9. Rojas I, Pomares H, Bernier JL, Ortega J, Pino B, Pelayo FJ, Prieto A (2002) Time series analysis using normalized PG-RBF network with regression weights. Neurocomputing 42:267–285
10. Wu S, Er MJ, Gao Y (2002) A fast approach for automatic generation of fuzzy rules by generalized dynamic fuzzy neural networks. IEEE Trans Fuzzy Syst 9(4):578–584
11. Huang G-B, Saratchandran P, Sundararajan N (2004) An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks. IEEE Trans Syst Man Cybern B 34(6):2284–2292
12. Huang G-B, Saratchandran P, Sundararajan N (2005) A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. IEEE Trans Neural Netw 16(1):57–67
13. Mulero-Martinez JI (2007) Boundedness of the nominal coefficients in Gaussian RBF neural networks. Neurocomputing 71(1–3):197–220
14. Chen S, Wolfgang A, Harris CJ, Hanzo L (2008) Symmetric RBF classifier for nonlinear detection in multiple-antenna-aided systems. IEEE Trans Neural Netw 19(5):737–745
15. Wang S, Yu DL (2008) Adaptive RBF network for parameter estimation and stable air–fuel ratio control. Neural Netw 21(1):102–112
16. Jin N, Liu D (2008) Wavelet basis function neural networks for sequential learning. IEEE Trans Neural Netw 19(3):535–540
17. Gholizadeh S, Salajegheh E, Torkzadeh P (2008) Structural optimization with frequency constraints by genetic algorithm using wavelet radial basis function neural network. J Sound Vib 312(1–2):316–331
18. Suetake N, Uchino E (2007) An RBFN–Wiener hybrid filter using higher order signal statistics. Appl Soft Comput 7(3):915–922
19. Falcao AO, Langlois T, Wichert A (2007) Flexible kernels for RBF networks. Neurocomputing 69(16–18):2356–2359
20. Sun YF, Liang YC, Zhang WL, Lee HP, Lin WZ, Cao LJ (2005) Optimal partition algorithm of the RBF neural network and its application to financial time series forecasting. Neural Comput Appl 14(1):36–44
21. Guillen A, Pomares H, Rojas I, Gonzalez J, Herrera LJ, Rojas F, Valenzuela O (2008) Studying possibility in a clustering algorithm for RBFNN design for function approximation. Neural Comput Appl 17(1):75–89
22.
Metadata

Title: Research on an online self-organizing radial basis function neural network
Authors: Honggui Han, Qili Chen, Junfei Qiao
Publication date: 01.07.2010
Publisher: Springer-Verlag
Published in: Neural Computing and Applications / Issue 5/2010
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI: https://doi.org/10.1007/s00521-009-0323-6
