01.12.2019 | Research | Issue 1/2019 | Open Access

# The impact mechanism of rural land circulation on promoting rural revitalization based on wireless network development

## 1 Introduction

## 2 State of the art

## 3 Methodology

### 3.1 BP neural network

The neuron connection weight from the input layer to the hidden layer is \( w_{ij} \), and the neuron connection weight from the hidden layer to the output layer is \( w_{jk} \). The hidden layer threshold is designated \( \theta_j \), and the output layer neuron threshold \( \theta_k \) is assumed to be a small value in the range [0, 1]. (2) Determine the input vector \( {x}_i=\left({x}_1,{x}_2,\cdots, {x}_m\right) \) of the input values and calculate the matching expected output vector \( \overset{\wedge }{Y_i}=\left(\overset{\wedge }{Y_1},\overset{\wedge }{Y_2},\cdots, \overset{\wedge }{Y_n}\right) \). The input vector \( x_i \) is fed to the neuron nodes of the input layer, and the forward computation proceeds layer by layer: the hidden layer outputs are obtained from \( {x}_j=f\left(\sum \limits_{i=0}^m{W}_{ij}{x}_i-{\theta}_j\right)\kern0.5em \left(j=1,2,\cdots, u\right) \), and the output layer outputs from \( {y}_k=f\left(\sum \limits_{j=0}^u{V}_{jk}{x}_j-{\theta}_k\right)\kern0.5em \left(k=1,2,\cdots, n\right) \). (3) Calculate the error between the output layer neuron outputs and the expected outputs. If the error meets the accuracy requirement, training is complete. If the error is still too large, the model enters the backward pass and the weights are corrected. After repeated iterations of this correction, weights satisfying the requirement are obtained, the computation of the model ends, and the signal output is performed. The BP neural network learning algorithm flow is shown in Fig. 2.
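The forward pass and correction step described above can be sketched as a minimal three-layer BP network. The sigmoid activation, the weight initialization range, and the learning rate below are illustrative assumptions, not values given in the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BPNetwork:
    """Minimal three-layer BP network: m inputs, u hidden neurons, n outputs."""

    def __init__(self, m, u, n, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-0.5, 0.5, (m, u))   # input -> hidden weights W_ij
        self.V = rng.uniform(-0.5, 0.5, (u, n))   # hidden -> output weights V_jk
        self.theta_j = rng.uniform(0.0, 1.0, u)   # hidden layer thresholds
        self.theta_k = rng.uniform(0.0, 1.0, n)   # output layer thresholds in [0, 1]

    def forward(self, x):
        # Hidden layer: x_j = f(sum_i W_ij x_i - theta_j)
        h = sigmoid(x @ self.W - self.theta_j)
        # Output layer: y_k = f(sum_j V_jk x_j - theta_k)
        y = sigmoid(h @ self.V - self.theta_k)
        return h, y

    def train_step(self, x, target, eta=0.5):
        """One forward pass plus one backward correction; returns squared error."""
        h, y = self.forward(x)
        err = target - y
        # Backward pass: local gradients of the sigmoid units
        delta_k = err * y * (1.0 - y)
        delta_j = (delta_k @ self.V.T) * h * (1.0 - h)
        # Gradient-descent corrections of weights and thresholds
        self.V += eta * np.outer(h, delta_k)
        self.theta_k -= eta * delta_k
        self.W += eta * np.outer(x, delta_j)
        self.theta_j -= eta * delta_j
        return float(np.sum(err ** 2))
```

Repeating `train_step` until the returned error falls below the accuracy requirement corresponds to the training loop in Fig. 2.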

### 3.2 BP neural network optimization

When the momentum coefficient \( m_c \) is equal to zero, the weights are adjusted purely according to the gradient descent method. When \( m_c = 1 \), the new weight adjustment equals the previous weight change, which excludes the contribution of the gradient descent method. After the momentum term is added to the calculation, when the network weights lie in a smoother region at the bottom of the error surface, ∇f(w(n)) becomes very small, so w(n + 1) ≈ w(n). In order to prevent the weight adjustment from stalling and to help the network jump out of a local minimum, the weight adjustment formula is optimized as formula (1).

In formula (1), \( m_c \) is the momentum coefficient, η is the learning rate, and \( m_c \in [0, 1] \). With the error accuracy set to 0.0008, the additional-momentum method is used to fit a nonlinear function; the learning rate is set at random and the average of 30 runs is taken. The curve relating the momentum coefficient to the learning time is thus obtained, as shown in Fig. 3. The figure shows that introducing the momentum term improves the speed of network learning. This adjustment method can effectively avoid local minima of the network and reduce the recurrence of errors.
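Formula (1) itself is not reproduced in this excerpt, so the sketch below uses the standard additional-momentum update \( \Delta w(n+1)=m_c\,\Delta w(n)-(1-m_c)\,\eta\,\nabla f(w(n)) \) as an assumed stand-in. It matches the two limiting cases above: \( m_c=0 \) recovers plain gradient descent, and \( m_c=1 \) simply repeats the previous weight change:

```python
import numpy as np

def momentum_descent(grad, w0, eta=0.1, mc=0.9, steps=100):
    """Gradient descent with an additional momentum term.

    Assumed update (standard momentum form, used here in place of the
    paper's formula (1)):
        dw(n+1) = mc * dw(n) - (1 - mc) * eta * grad(w(n))
    """
    w = np.asarray(w0, dtype=float)
    dw = np.zeros_like(w)
    for _ in range(steps):
        dw = mc * dw - (1.0 - mc) * eta * grad(w)
        w = w + dw  # momentum keeps w moving even where grad(w) is tiny
    return w

# Toy example: minimize f(w) = (w - 3)^2, whose gradient is 2(w - 3)
grad = lambda w: 2.0 * (w - 3.0)
w_plain = momentum_descent(grad, [0.0], eta=0.1, mc=0.0, steps=50)  # pure gradient descent
w_momo  = momentum_descent(grad, [0.0], eta=0.1, mc=0.5, steps=50)  # with momentum
```

Because the previous change \( \Delta w(n) \) is carried forward, the update stays nonzero in flat regions of the error surface, which is exactly the stalling problem the momentum term is introduced to avoid.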

## 4 Result analysis and discussion

^{(− 10)}. To avoid overfitting in the experiment, which would reduce the network model's prediction ability and weaken its generalization, three-fold cross-validation is used; all reported prediction results are the average values over the cross-validation folds.
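The three-fold cross-validation scheme can be sketched as follows. The `fit`/`predict` callables and the least-squares toy model are hypothetical stand-ins for the trained BP network described in Section 3:

```python
import numpy as np

def three_fold_cv_predictions(X, y, fit, predict, seed=0):
    """Three-fold cross-validation: each sample is predicted by a model
    trained on the other two folds, so every reported prediction is
    out-of-sample and results can be averaged over the folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 3)
    preds = np.empty(len(y), dtype=float)
    for k in range(3):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(3) if j != k])
        model = fit(X[train], y[train])
        preds[test] = predict(model, X[test])
    return preds

# Toy example with a least-squares line as the "model"
X = np.linspace(0.0, 1.0, 30).reshape(-1, 1)
y = 2.0 * X[:, 0] + 1.0
fit = lambda Xt, yt: np.polyfit(Xt[:, 0], yt, 1)
predict = lambda coef, Xs: np.polyval(coef, Xs[:, 0])
preds = three_fold_cv_predictions(X, y, fit, predict)
```

Averaging the per-fold errors of `preds` against `y` gives the kind of cross-validated figures reported in the table below.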

| Year | Predicted managers' proportion (%) | Predicted researchers' proportion (%) | Real managers' proportion (%) | Real researchers' proportion (%) | Relative error (managers) | Relative error (researchers) |
|---|---|---|---|---|---|---|
| 2016 | 3.35 | 62.21 | 3.31 | 61.79 | 0.56 | 0.04 |
| 2017 | 3.22 | 64.34 | 3.42 | 64.05 | 0.61 | 0.52 |