
01.04.2014 | Issue 3/2014 | Open Access

# Wireless link prediction and triggering using modified Ornstein–Uhlenbeck jump diffusion process

Journal: Wireless Networks, Issue 3/2014
Authors: Eric Chin, David Chieng, Victor Teh, Marek Natkaniec, Krzysztof Loziak, Janusz Gozdecki

## 1 Introduction

Monitoring a wireless link is a challenging task because the link is constantly affected by interference and temporary fading. An efficient and reliable network monitoring system is generally expected to collect information about the current system configuration, observe the current values of parameters influencing performance metrics, detect abnormal behaviour of a node or link and, in some cases, predict performance degradation events. Accurate and timely prediction is critical to ensure that there is sufficient time for mitigation actions such as self-(re)configuration or healing to take place [ 3]. In wireless multi-hop [ 4] or mesh [ 5] environments, the behaviour of links, particularly those closer to a gateway, is a primary concern, as such links carry the traffic of other nodes further down the hops. Observed in the time domain, the signal strength of typical wireless links appears to exhibit both mean-reverting behaviour (converging towards a long-term mean) and discontinuous “jumps” (missing values for a certain period of time). It is therefore natural to consider models with these properties. One such stochastic model, which we consider in this paper, is the Ornstein–Uhlenbeck (OU) diffusion process, first applied in physics [ 7] to describe the Brownian motion of particles suspended in a fluid with friction. Owing to the inherent jump behaviour of wireless signal strength, or received signal strength indicator (RSSI), values, we propose modelling this behaviour with a modified Ornstein–Uhlenbeck jump diffusion process. The proposed technique is an integral part of the monitoring system developed by the ICT FP7 CARMEN (CARrier grade wireless MEsh Network) project [ 1, 2, 6, 8].
The remainder of this paper is organised as follows. Section  2 discusses the related work and the motivation behind this research. Section  3 presents the modelling and calibration of the proposed method and the novel link prediction and triggering algorithms. Section  4 presents the OU-LPT analysis and demonstrates the design and real-time implementation of the OU-LPT technique. Finally, conclusions are drawn in Sect. 5.

## 2 Related work

As revealed in [ 10], although RSSI or SNR alone is good enough for assessing a single link, it may not achieve sufficient accuracy in deciding the end-to-end or network-wide transmission quality. In such situations, other cross-layer metrics such as (MAC/IP-layer) latency, throughput and loss may provide a more accurate means of evaluating the quality of a link. Metrics such as expected transmission count (ETX) and expected transmission time (ETT) have been widely proposed to support routing decisions in wireless mesh networks [ 16]. However, these metrics depend heavily on the type of application and hence pose additional complexity when performing prediction. First, the monitoring system would need to know the exact traffic pattern of the sender; second, trigger timeliness is a foreseeable challenge, since a specific period is required to collect, compute and analyze cross-layer frame information. In this paper, therefore, we focus only on SNR/RSSI, as it generally provides a reasonably good indication of link quality without requiring knowledge of the traffic characteristics, patterns or distribution. That said, the proposed link prediction and triggering technique can be applied to any desired metric, such as throughput, delay, jitter or loss rate, as long as it exhibits some form of mean-reverting behaviour and discontinuous “jumps”.

## 3 Link prediction and triggering with OU diffusion process (OU-LPT)

To make a reliable forecast of local and neighbouring mesh links, we propose a diffusion process model for a selected window of a series of RSSI values. The prediction can be applied to any channel information received from the neighbouring radios. Instead of using statistical time series modelling, which involves a comprehensive model identification process followed by numerically intensive parameter estimation, we propose a much simpler and effective way to estimate the diffusion process model parameters from historical data.

### 3.1 Ornstein–Uhlenbeck jump diffusion process

Figure  1 shows the time series of both raw and smoothed RSSI values of a typical WLAN link with a time step of 100 ms. The latter is smoothed via the moving average method over 10 values in order to reduce short-term fluctuations. Observed in the time domain, the RSSI data appear to exhibit both mean-reverting behaviour and discontinuous “jumps”, so it is natural to consider a stochastic model with these properties, such as the Ornstein–Uhlenbeck (OU) diffusion process.
We propose the modelling of such behaviour using a modified mean-reverting diffusion process called the OU jump diffusion process, OU( κ, θ, σ, J), defined by the stochastic differential equation (SDE):
$$dX_{t} = \kappa \left( {\theta - X_{t} } \right)dt + \sigma \,dW_{t} + \log J_{t} \,dN_{t}$$
(1)
where $$dW_{t} \sim N\left( {0,dt} \right)$$ is a Wiener process, κ > 0 is the mean reversion rate, θ is the mean and σ > 0 is the volatility.
The process dN t is a Poisson process with parameter λ such that
$$dN_{t} = \left\{ {\begin{array}{*{20}c} 1 \hfill & {\text{with probability}} \hfill & {\lambda \,dt} \hfill \\ 0 \hfill & {\text{with probability}} \hfill & {1 - \lambda \,dt} \hfill \\ \end{array} } \right.$$
The random variable J t  > 0 is the jump amplitude with $$\log J_{t} \sim N\left( {\mu_{J} ,\,\sigma_{J}^{2} } \right)$$, and $$dW_{t} ,\,dN_{t}$$ and J t are mutually independent. Using the analysis given by [ 18] for each forecast step ahead $$\ell \,\, \ge \,\,1$$ and a constant time step Δ t we can solve the SDE as
$$X_{t + \ell \Updelta t} = X_{t} e^{ - \kappa \ell \Updelta t} + \left( {\theta + \frac{{\lambda \mu_{J} }}{\kappa }} \right)\left( {1 - e^{ - \kappa \ell \Updelta t} } \right) + \sigma \sqrt {\frac{{1 - e^{ - 2\kappa \ell \Updelta t} }}{2\kappa }} Z_{1} + \sqrt {\frac{\lambda }{2\kappa }\left( {\mu_{J}^{2} + \sigma_{J}^{2} } \right)} Z_{2}$$
(2)
and given X t it has expectation $$E\left( {X_{t + \ell \Updelta t} } \right) = X_{t} e^{ - \kappa \ell \Updelta t} + \left( {\theta + \frac{{\lambda \mu_{J} }}{\kappa }} \right)\left( {1 - e^{ - \kappa \ell \Updelta t} } \right)$$, and variance $$Var\left( {X_{t + \ell \Updelta t} } \right) = \sigma^{2} \left( {\frac{{1 - e^{ - 2\kappa \ell \Updelta t} }}{2\kappa }} \right) + \frac{\lambda }{2\kappa }\left( {\mu_{J}^{2} + \sigma_{J}^{2} } \right)$$ where Z 1, Z 2 ~  N(0,1) and Z 1, Z 2 are independent.
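For concreteness, the dynamics in (1) can be simulated step by step. The following Python sketch (the paper's own experiments use Matlab; the function and parameter names here are our own illustrative choices) applies a simple Euler–Maruyama discretisation, drawing a Poisson jump with probability λΔt at each step:

```python
import math
import random

def simulate_ou_jump(x0, kappa, theta, sigma, lam, mu_j, sigma_j,
                     dt, n_steps, seed=None):
    """Euler-Maruyama simulation of the OU jump diffusion SDE (1):
    dX = kappa*(theta - X)*dt + sigma*dW + log(J)*dN,
    where dN is a Poisson event with probability lam*dt per step and
    log(J) ~ N(mu_j, sigma_j^2)."""
    rng = random.Random(seed)
    path = [x0]
    x = x0
    for _ in range(n_steps):
        drift = kappa * (theta - x) * dt            # mean reversion toward theta
        diffusion = sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        jump = 0.0
        if rng.random() < lam * dt:                 # Poisson event occurs
            jump = rng.gauss(mu_j, sigma_j)         # log-jump amplitude
        x = x + drift + diffusion + jump
        path.append(x)
    return path
```

With σ = 0 and λ = 0 the path decays deterministically towards θ at rate κ, which is a quick sanity check on the discretisation.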

### 3.2 Model calibration

In order to calibrate the parameters κ, θ, σ, λ, μ J and σ J , we can subdivide the OU jump diffusion process as follows:
$$dX_{t} = \left\{ {\begin{array}{*{20}c} {\kappa (\theta - X_{t} )dt + \sigma dW_{t} + \log J_{t} \,dN_{t} } \hfill & {{\text{if}}\,\,{\text{Poisson}}\,{\text{event}}\,{\text{occurs}}} \hfill \\ {\kappa (\theta - X_{t} )dt + \sigma dW_{t} } \hfill & {{\text{if}}\,\,{\text{Poisson}}\,{\text{event}}\,{\text{does}}\,{\text{not}}\,{\text{occur}}} \hfill \\ \end{array} } \right.$$
To begin with, the jump diffusion model in its simplest form needs an estimate of the probability of a jump, measured by λ, and of its size J t ; this is made more complicated by J t having a distribution. Given the array of parameters to estimate, if we were to set up a maximum likelihood method for the full model (mixing jumps and diffusion), it may be hard for the algorithm to distinguish what are jumps and what is diffusion. Hence we subdivide the parameter estimation into two parts: the jump components and the mean-reverting diffusion process.
As seen from the normal probability plot of the returns $$r_{t} = X_{t + \Updelta t} - X_{t}$$, where Δt = 0.1 s, in Fig.  2, the fat tails suggest that the probability of rare events occurring is higher than predicted by a Gaussian distribution. Moreover, the scatter plot of $$X_{t + \Updelta t}$$ against $$X_{t}$$ shows that a linear fit to describe their interaction is inappropriate in the presence of jumps. The histogram of the returns together with a fitted normal density (see Fig.  3) likewise shows significant fat tails, indicating that the returns are not normally distributed.
In order to extract the jump components from a series of returns $$r_{t} \,\, = \,\,X_{t + \Updelta t} \, - \,X_{t}$$, we can use the following pseudo-code:
```
Begin
  Set R = {r_1, r_2, …, r_N} and its complement R^C = ∅,
  where N is the number of observations.
  Repeat
    • Find the mean r̄ and the standard deviation σ_r of the set R.
    • For every element of R, filter out the return r_t if |r_t − r̄| > 3σ_r,
      setting R^C = R^C ∪ {r_t} and R = R − {r_t}.
  Until no further returns are filtered.
End
```
From the output of the filtered set R C , we can estimate the frequency of jumps $$\hat{\lambda }$$, mean $$\hat{\mu }_{J}$$ and variance $$\hat{\sigma }_{J}^{2}$$ as
$$\hat{\lambda } = \left( {\frac{{\left| {R^{C} } \right|}}{{\left| R \right| + \left| {R^{C} } \right|}}} \right)\Updelta t,\hat{\mu }_{J} = \bar{r}^{C} \,{\text{and}}\,\hat{\sigma }_{J}^{2} = \sigma_{{r^{C} }}^{2}$$
(3)
where $$\left| {R^{C} } \right|$$ and $$\left| R \right|$$ are the cardinalities of the filtered-out set R C and the filtered set R respectively, and $$\bar{r}^{C}$$ and $$\sigma_{{r^{C} }}^{2}$$ are the mean and variance of the set R C respectively.
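The iterative 3σ filter above, together with the estimates in (3), translates directly into code. A minimal Python sketch (function names are ours, not the authors'); note that λ̂ is computed exactly as printed in (3):

```python
import statistics

def extract_jumps(returns):
    """Iterative 3-sigma filter: split the returns into a 'continuous'
    set R and a jump set R^C, repeating until nothing is filtered."""
    r = list(returns)
    r_c = []
    while True:
        mean = statistics.mean(r)
        sd = statistics.pstdev(r)  # population std; a sample std is a variant
        flagged = [x for x in r if abs(x - mean) > 3.0 * sd]
        if not flagged:
            break
        r = [x for x in r if abs(x - mean) <= 3.0 * sd]
        r_c.extend(flagged)
    return r, r_c

def jump_estimates(r, r_c, dt):
    """Eq. (3): jump frequency, mean and variance of the log-jump sizes."""
    lam = (len(r_c) / (len(r) + len(r_c))) * dt   # as printed in Eq. (3)
    mu_j = statistics.mean(r_c) if r_c else 0.0
    var_j = statistics.pvariance(r_c) if r_c else 0.0
    return lam, mu_j, var_j
```

A single outlier in an otherwise small-amplitude return series ends up in R^C after one pass, while the continuous set R is left untouched.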
Once we have extracted the jump components from the original series, the normal probability plot of the filtered returns in Fig.  5 shows that a high proportion of the residuals lie on the straight line passing through zero. The histogram also shows that most of the data constituting the fat tails have been removed. This indicates that the normal plot of the filtered residuals is well behaved.
In addition, from the scatter plot of the filtered $$X_{t + \Updelta t}$$ against $$X_{t}$$ in Fig.  4, we can deduce that there is a strong linear relationship between them, and hence we can fit a linear model to describe their interaction. To estimate the remaining mean reversion parameters, we can use a subset of the filtered series { X t } for each time step. Having identified which parts of the original series contain jump components, or statistically significant jumps, we can then extract a subset of the original series in which the returns are continuous and follow an OU process:
$$dX_{t} = \kappa (\theta - X_{t} )dt + \sigma dW_{t}$$
(4)
Taking note that
$$d(e^{\kappa t} X_{t} ) = \kappa e^{\kappa t} X_{t} dt + e^{\kappa t} dX_{t} + \frac{1}{2}\kappa^{2} e^{\kappa t} X_{t} (dt)^{2} + \cdots$$
(5)
and from Ito’s lemma [ 19] such that
$$(dX_{t} )^{2} = \sigma^{2} dt,\,(dt)^{\nu } = o(1),\,\nu > 1$$
(6)
for any arbitrary time step Δ t > 0, the Ornstein–Uhlenbeck process has a unique solution
$$X_{t + \Updelta t} = X_{t} e^{ - \kappa \Updelta t} + \theta (1 - e^{ - \kappa \Updelta t} ) + \sigma \sqrt {\frac{{1 - e^{ - 2\kappa \Updelta t} }}{2\kappa }}Z$$
(7)
where Z ~  N(0,1) follows a standard normal distribution.
In this study, rather than finding the parameter values with an expensive maximum likelihood estimation method, we rely on simple regression analysis. As shown in Fig.  4, there is a strong linear relationship between $$X_{t + \Updelta t}$$ and $$X_{t}$$ (we take Δ t = 0.1 s) for all t values. Hence the first step in our regression-based parameter estimation is to find the best fit of the RSSI time series { X t } to its past values in order to make future forecasts.
To begin, taking N > 2 to be the window size for the series of data and Δ t the step size, we model the relationship between consecutive RSSI values $$X_{t} ,\,X_{t + \Updelta t} ,\,X_{t + 2\Updelta t} , \ldots ,X_{T}$$ as
$$X_{t + \Updelta t} = aX_{t} + b + \varepsilon_{t} ,\,\varepsilon_{t} \sim N(0,\sigma_{\varepsilon }^{2} )$$
(8)
where T =  t +  NΔ t, a and b are the regression parameters, and the errors ε t are independent and identically normally distributed.
By comparing the relationship between the linear fit and the OU process model, the parameters can be equated as
$$a = e^{ - \kappa \Updelta t} ,\,b = \theta (1 - e^{ - \kappa \Updelta t} ),\,\sigma_{\varepsilon } = \sigma \sqrt {\frac{{1 - e^{ - 2\kappa \Updelta t} }}{2\kappa }}$$
(9)
Given that we require κ > 0, and provided a ∈ (0,1), the jump diffusion model parameters can be recovered via
$$\theta = \frac{b}{1 - a},\,\kappa = -\frac{\log a}{\Updelta t},\,\sigma = \sigma_{\varepsilon } \sqrt {\frac{ - 2\log a}{{(1 - a^{2} )\Updelta t}}}$$
(10)
In order to find the optimal values $$\hat{a}$$ and $$\hat{b}$$ we can solve the following least-squares regression optimization problem subject to a bound constraint:
$$P\left\{ {\begin{array}{*{20}c} {\hbox{min} {\text{imize}}} \hfill & {\sum\limits_{i = t + \Updelta t}^{T} {\left( {X_{i} - aX_{i - \Updelta t} - b} \right)^{2} } } \hfill \\ {a,b} \hfill & {} \hfill \\ {\text{subject to}} \hfill & {0 < a < 1.} \hfill \\ \end{array} } \right.$$
Instead of using computationally expensive optimization subroutines such as L-BFGS-B method [ 17] to solve problem P iteratively, we can utilise least-squares regression analysis by first calculating the following quantities
$$S_{x} = \sum\limits_{i = t + \Updelta t}^{T} {X_{i - \Updelta t} } ,\,S_{y} = \sum\limits_{i = t + \Updelta t}^{T} {X_{i} } ,\,S_{xx} = \sum\limits_{i = t + \Updelta t}^{T} {X_{i - \Updelta t}^{2} } ,\,S_{xy} = \sum\limits_{i = t + \Updelta t}^{T} {X_{i - \Updelta t} X_{i} } ,\,S_{yy} = \sum\limits_{i = t + \Updelta t}^{T} {X_{i}^{2} }$$
(11)
and set
$$\hat{a}_{0} \,\, = \,\,\frac{{N\,S_{xy} \,\, - \,\,S_{x} S_{y} }}{{N\,S_{xx} \,\, - \,\,S_{x}^{2} }}$$
(12)
Hence the optimal values $$\hat{a}$$ and $$\hat{b}$$ can be estimated as follows:
$$\hat{a} = \left\{ {\begin{array}{*{20}c} {\hat{a}_{0} } \hfill & {{\text{if }}0 < \hat{a}_{0} < 1} \hfill & {} \hfill \\ \varepsilon \hfill & {{\text{if }}\hat{a}_{0} \le 0} \hfill & {{\text{and}}\,\,\hat{b} = \frac{{S_{y} - \hat{a}S_{x} }}{N}} \hfill \\ {1 - \varepsilon } \hfill & {\text{otherwise}} \hfill & {} \hfill \\ \end{array} } \right.$$
(13)
where $$\varepsilon \,\, \in \,\,(0,\,\,1)$$.
In addition under the assumption that the error term has a constant variance, once we have found the optimal values $$\hat{a}$$ and $$\hat{b}$$, the estimated standard deviation of the error term is
$$\hat{\sigma }_{\varepsilon } = \sqrt {\frac{{NS_{yy} - S_{y}^{2} - \hat{a}(NS_{xy} - S_{x} S_{y} )}}{N(N - 2)}}$$
(14)
Note that if $$\kappa \Updelta t \ll 1$$ then we can approximate
$$a \approx 1 - \kappa \Updelta t,\,b \approx \theta \kappa \Updelta t,\,\sigma_{\varepsilon } \approx \sigma \sqrt {\Updelta t}$$
(15)
with errors of order O(( κΔ t) 2). Hence the mean reversion model parameters can be approximated as
$$\hat{\theta } = \frac{{\hat{b}}}{{1 - \hat{a}}},\,\hat{\kappa } = \frac{{1 - \hat{a}}}{\Updelta t},\,\hat{\sigma } = \frac{{\hat{\sigma }_{\varepsilon } }}{{\sqrt {\Updelta t} }}$$
(16)
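The whole calibration of the continuous part, Eqs. (11)–(14) followed by the inversion in (10) with the clamping of (13), is a short closed-form computation. A Python sketch, assuming `xs` holds the filtered, smoothed RSSI window (the helper name and the clamping constant `eps` are our illustrative choices):

```python
import math

def calibrate_ou(xs, dt, eps=1e-4):
    """Least-squares calibration of the filtered (continuous) OU part.
    xs: window of filtered smoothed RSSI values; dt: time step."""
    n = len(xs) - 1                      # number of (X_t, X_{t+dt}) pairs
    x, y = xs[:-1], xs[1:]
    sx, sy = sum(x), sum(y)              # Eq. (11): running sums
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    syy = sum(v * v for v in y)
    a0 = (n * sxy - sx * sy) / (n * sxx - sx * sx)        # Eq. (12)
    # Eq. (13): clamp a into (0, 1) so that kappa > 0
    a = a0 if 0.0 < a0 < 1.0 else (eps if a0 <= 0.0 else 1.0 - eps)
    b = (sy - a * sx) / n
    # Eq. (14): standard deviation of the regression error
    sig_eps = math.sqrt(
        max((n * syy - sy * sy - a * (n * sxy - sx * sy)) / (n * (n - 2)), 0.0))
    # Eq. (10): recover the OU parameters
    theta = b / (1.0 - a)
    kappa = -math.log(a) / dt
    sigma = sig_eps * math.sqrt(-2.0 * math.log(a) / ((1.0 - a * a) * dt))
    return theta, kappa, sigma
```

On a noiseless series X_{t+Δt} = 0.8 X_t − 14 the routine recovers θ = −70 and κ = −ln(0.8)/Δt, up to floating-point error.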
Collectively we can then write
$$\hat{X}_{t + \Updelta t} = X_{t} e^{{ - \hat{\kappa }\Updelta t}} + \left( {\hat{\theta } + \frac{{\hat{\lambda }\hat{\mu }_{J} }}{{\hat{\kappa }}}} \right)\left( {1 - e^{{ - \hat{\kappa }\Updelta t}} } \right) + \hat{\sigma }\sqrt {\frac{{1 - e^{{ - 2\hat{\kappa}\Updelta t}} }}{{2\hat{\kappa }}}}Z_{1} + \sqrt {\frac{{\hat{\lambda}}}{{2\hat{\kappa }}}\left( {\hat{\mu }_{J}^{2} + \hat{\sigma}_{J}^{2} } \right)}Z_{2}$$
(17)
where Z 1, Z 2 ~  N(0,1) and Z 1, Z 2 are independent.

### 3.3 Prediction algorithm

Once the parameter values $$\hat{\kappa }$$, $$\hat{\theta }$$, $$\hat{\sigma }$$, $$\hat{\lambda }$$, $$\hat{\mu }_{J}$$ and $$\hat{\sigma }_{J}$$ are estimated, we can deduce that the estimated forecast $$\hat{X}_{t + \ell \Updelta t}$$ satisfies
$$\frac{{\hat{X}_{t + \ell \Updelta t} \,\, - \,\,E\left( {\hat{X}_{t + \ell \Updelta t} } \right)}}{{\sqrt {Var\left( {\hat{X}_{t + \ell \Updelta t} } \right)} }}\,\,\sim \,\,N\left( {0,\,\,1} \right)$$
(18)
Assume that the current mesh node has knowledge of the RSSI on the channels of all (or some of) the neighbouring mesh nodes with which it can form a link. The mesh node issues a trigger only when the forecasted RSSI value of its present link falls below its threshold value while the forecasted RSSI value of a target neighbouring mesh node exceeds its threshold value. Denoting the neighbouring mesh nodes’ RSSI values by $$Y_{t}^{(i)}$$, where i = 1, 2, …, M and M is the total number of neighbouring mesh nodes (or mesh radios in the multi-radio case), the current mesh node issues a trigger only when its current link’s RSSI satisfies
$$\hat{X}_{t + \ell \Updelta t} \le \bar{X}\,\,{\text{and}}\,\,\hat{Y}_{t + \ell \Updelta t}^{(j)} > \bar{Y}^{(j)}$$
(19)
where the index j is defined as
$$j = \{ i:\hbox{max} \{ \hat{Y}_{t + \ell \Updelta t}^{(i)} - \bar{Y}^{(i)} ,\,0\} ,\quad i = 1,2, \ldots ,M\}$$
(20)
where $$\bar{X}$$ is the current link’s RSSI threshold, representing the minimal QoS it must support in order to operate successfully, $$\bar{Y}^{(i)}$$ is the ith neighbour’s RSSI threshold value and $$\hat{Y}_{t + \ell \Updelta t}^{(j)}$$ is the predicted RSSI value of the jth neighbouring mesh node with which it could form a new link. The criteria in ( 19)–( 20) ensure that the OU-LPT method chooses only the “best” neighbouring mesh node; if there is no better mesh node, the scheme does not trigger a link handover event.
The Link Going Down (LGD) event is introduced to help wireless nodes prepare for link handover or switching prior to Link Down (LD), so that switching delays and service interruptions can be minimized. Based on the forecasted RSSI values of the current link, and in order to minimize decision errors, wireless card manufacturers such as Intel [ 20] introduce a protection margin (or hysteresis factor) for LGD, $$\Updelta_{x}^{GD} \ge 0$$. This protection margin is added to the RSSI threshold value $$\bar{X}$$ so that the current link has an enhanced threshold value, $$\bar{X} + \Updelta_{x}^{GD}$$, ensuring a better QoS. If the forecasted RSSI value is greater than the enhanced threshold value, the system does not trigger a link handover to another mesh node. Table  1 lists the trigger thresholds used in this paper, where $$\Updelta_{x}^{U} > \Updelta_{x}^{CU} > \Updelta_{x}^{GD} > 0.$$
Table 1
Thresholds for link handover trigger
| Threshold | Value |
| --- | --- |
| Link-up threshold (LU_TH) | $$\bar{X} + \Updelta_{x}^{U}$$ |
| Link-coming-up threshold (LCU_TH) | $$\bar{X} + \Updelta_{x}^{CU}$$ |
| Link-going-down threshold (LGD_TH) | $$\bar{X} + \Updelta_{x}^{GD}$$ |
| Link-down threshold (LD_TH) | $$\bar{X}$$ |
With this protection margin $$\Updelta_{x}^{GD}$$, and for a forecasted RSSI value $$\hat{X}_{t + \ell \Updelta t}$$, the probability of making a trigger is defined as
$$P(\hat{X}_{t\,\, + \,\,\ell \Updelta t} \,\, \le \,\,\bar{X}\,\, + \,\,\Updelta_{x}^{GD} )\,\, = \,\,P\left( {Z\,\, \le \,\,\frac{{\bar{X}\,\, + \,\,\Updelta_{x}^{GD} \,\, - \,\,E\left( {\hat{X}_{t + \ell \Updelta t} } \right)}}{{\sqrt {Var\left( {\hat{X}_{t + \ell \Updelta t} } \right)} }}} \right)$$
(21)
and if
$$E\left( {\hat{X}_{t + \ell \Updelta t} } \right)\le \bar{X} + \Updelta_{x}^{GD} \,{\text{and}}\, P\left( {\hat{X}_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} } \right)\ge \alpha$$
(22)
where α ∈ (0,1) is a margin of error, then the current mesh node will issue a trigger at time t to initiate a link handover to an alternative mesh node or radio.
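The decision rule (21)–(22) needs only the forecast mean and variance and one evaluation of the standard normal CDF. A minimal Python sketch (function names are illustrative; Φ is computed via the error function):

```python
import math

def phi(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def should_trigger(mean_fc, var_fc, x_bar, margin_gd, alpha):
    """Decision rule (22): trigger when the forecast mean lies below the
    enhanced threshold AND the probability (21) of the forecast falling
    below that threshold is at least alpha."""
    thresh = x_bar + margin_gd
    p_below = phi((thresh - mean_fc) / math.sqrt(var_fc))
    return mean_fc <= thresh and p_below >= alpha
```

For example, with X̄ = −80 dBm, Δ_x^GD = 4 dB and α = 0.60, a forecast with mean −78 dBm and unit variance triggers, while a forecast mean of −70 dBm does not.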
In addition, for the forecasted RSSI values of the neighbouring mesh nodes, we introduce for each ith node a protection margin $$\Updelta_{y}^{(i)} \ge 0$$ so as to minimize the error of falsely selecting a node for handover. By analogy with the probability of making a trigger for the mesh link, for each neighbouring mesh radio we define the probability of selecting a new node:
$$P\left( {\hat{Y}_{t + \ell \Updelta t}^{(i)} \ge \bar{Y}^{(i)} + \Updelta_{{y^{(i)} }}^{CU} } \right) = P\left( {Z \ge \frac{{\bar{Y}^{(i)} + \Updelta_{{y^{(i)} }}^{CU} - E\left( {\hat{Y}_{t + \ell \Updelta t}^{(i)} } \right)}}{{\sqrt {Var\left( {\hat{Y}_{t + \ell \Updelta t}^{(i)} } \right)} }}} \right),\quad i = 1,2, \ldots ,M$$
(23)
and if
$$P\left( {\hat{Y}_{t + \ell \Updelta t}^{(i)} \ge \bar{Y}^{(i)} + \Updelta_{{y^{(i)} }}^{CU} } \right) \ge \beta$$
(24)
where β ∈ (0,1) is a margin of error, then the ith node can be selected as the link handover target. By augmenting our OU-LPT method with a protection margin, we can now redefine the criterion for a handover from the current mesh node to the jth mesh node at time t as:
$$\left\{ {\begin{array}{*{20}c} {E\left( {\hat{X}_{t + \ell \Updelta t} } \right) \le \bar{X} + \Updelta_{x}^{GD} \,and} \hfill \\ {P\left( {\hat{X}_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} } \right) \ge \alpha } \hfill \\ \end{array} } \right\}\,{\text{and}}\,\left\{ {\begin{array}{*{20}c} {E\left( {\hat{Y}_{t + \ell \Updelta t}^{(j)} } \right) \ge \bar{Y}^{(j)} + \Updelta_{{y^{(j)} }}^{CU} \,and} \hfill \\ {P\left( {\hat{Y}_{t + \ell \Updelta t}^{(j)} \ge \bar{Y}^{(j)} + \Updelta_{{y^{(j)} }}^{CU} } \right) \ge \beta } \hfill \\ \end{array} } \right\}$$
(25)
where the index j is defined as
$$j = \left\{ {i:\hbox{max} \left\{ {P\left( {\hat{Y}_{t + \ell \Updelta t}^{(i)} \ge \bar{Y}^{(i)} + \Updelta_{{y^{(i)} }}^{CU} } \right),\quad i = 1,2, \ldots ,M} \right\}} \right\}$$
(26)
By analogy with statistical hypothesis testing, the procedure described above may lead us to commit a false trigger (or false positive) error. With the protection margin $$\Updelta_{x}^{GD}$$, and for a forecasted RSSI value $$\hat{X}_{t + \ell \Updelta t}$$, we define the probability of making a false trigger (or false alarm) at time t as
$$P\left( {\left. {\hat{X}_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} } \right|X_{t + \ell \Updelta t} > \bar{X} + \Updelta_{x}^{GD} } \right)$$
(27)
This is the error of committing a false trigger when the true RSSI value $$X_{t + \ell \Updelta t}$$ is greater than the enhanced threshold requirement but the forecasted RSSI value $$\hat{X}_{t + \ell \Updelta t}$$ is lower than the threshold value plus the protection margin. From ( 22), via a Kolmogorov–Smirnov goodness-of-fit test, we can deduce that the residuals of the smoothed and fitted RSSI values $$\hat{\varepsilon }_{t} = X_{t} - \hat{X}_{t}$$ follow
$$\hat{\varepsilon }_{t} = X_{t} - \hat{X}_{t} \sim N\left( {\mu_{{\hat{\varepsilon }}} ,\,\sigma_{{\hat{\varepsilon }}}^{2} } \right)$$
(28)
where $$E(\hat{\varepsilon }_{t} ) = \mu_{{\hat{\varepsilon }}}$$ and $$Var(\hat{\varepsilon }_{t} ) = \sigma_{{\hat{\varepsilon }}}^{2}$$. Hence we can write that P( false trigger) =
$$P(\hat{X}_{t\,\, + \,\,\ell \Updelta t} \,\, \le \,\,\bar{X}\,\, + \,\,\Updelta_{x}^{GD} \,\,|\,\,X_{t\,\, + \,\,\ell \Updelta t} \,\, > \,\,\bar{X}\,\, + \,\,\Updelta_{x}^{GD} ) = \,\frac{{\int_{\,\, - \infty }^{{\,\,x\,\, = \,\,\bar{X}\,\, + \,\,\Updelta_{x}^{GD} }} {\left[ {\,1\,\, - \,\,\Upphi \left( {\frac{{\bar{X}\,\, + \,\,\Updelta_{x}^{GD} \,\, - \,\,x\,\, - \,\,\mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right)} \right]} \,\,f_{{\hat{X}}} \left( x \right)\,\,dx\,}}{{1\,\, - \,\,\int_{\,\, - \infty }^{\,\,\,\infty } {\Upphi \left( {\frac{{\bar{X}\,\, + \,\,\Updelta_{x}^{GD} \,\, - \,\,x\,\, - \,\,\mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right)\,f_{{\hat{X}}} \left( x \right)\,\,dx} }}\,$$
(29)
where Z ~  N(0,1), $$\Upphi ( \cdot )$$ denotes the cumulative standard normal distribution function and $$f_{{\hat{X}}} (x) = \frac{1}{{\sigma_{{\hat{X}_{t + \ell \Updelta t} }} \sqrt {2\pi } }}e^{{ - \frac{1}{2}\left( {\frac{{x - \mu_{{\hat{X}_{t + \ell \Updelta t} }} }}{{\sigma_{{\hat{X}_{t + \ell \Updelta t} }} }}} \right)^{2} }}$$ is the probability density function (pdf) of the forecasted RSSI values. On the other hand we can also define the probability of making a false non-trigger (or missed trigger) as
$$P(\hat{X}_{t\,\, + \,\,\ell \Updelta t} \,\, > \,\,\bar{X}\,\, + \,\,\Updelta_{x}^{GD} \,\,|\,\,X_{t\,\, + \,\,\ell \Updelta t} \,\, \le \,\,\bar{X}\,\, + \,\,\Updelta_{x}^{GD} ) = \,\frac{{\int_{{x\, = \,\,\bar{X}\,\, + \,\,\Updelta_{x}^{GD} \,\,}}^{\infty } {\Upphi \left( {\frac{{\bar{X}\,\, + \,\,\Updelta_{x}^{GD} \,\, - \,\,x\,\, - \,\,\mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right)} \,f_{{\hat{X}}} \left( x \right)\,\,dx\,}}{{\int_{\,\, - \infty }^{\,\,\,\infty } {\Upphi \left( {\frac{{\bar{X}\,\, + \,\,\Updelta_{x}^{GD} \,\, - \,\,x\,\, - \,\,\mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right)\,f_{{\hat{X}}} \left( x \right)\,\,dx} }}\,$$
(30)
which is the error when the true RSSI value $$X_{t + \ell \Updelta t}$$ is less than the enhanced threshold requirement but the forecasted RSSI value, $$\hat{X}_{t + \ell \Updelta t}$$ shows that it is greater than the threshold value plus the protection margin. For a complete derivation of these two results, please refer to the “ Appendix”.
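Expressions (29) and (30) involve integrals with no closed form, but they are straightforward to evaluate numerically. The sketch below (our own, not the authors' implementation) approximates the false-trigger probability (29) with a trapezoidal rule over a truncated support of the forecast density:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def false_trigger_prob(thresh, mu_fc, sd_fc, mu_eps, sd_eps, n=4000):
    """Numerical evaluation of Eq. (29), with thresh = Xbar + margin:
    numerator  : int_{-inf}^{thresh} [1 - Phi((thresh - x - mu_eps)/sd_eps)] f(x) dx
    denominator: 1 - int [Phi((thresh - x - mu_eps)/sd_eps)] f(x) dx,
    where f is the N(mu_fc, sd_fc^2) forecast density, truncated to +-8 sd."""
    lo, hi = mu_fc - 8.0 * sd_fc, mu_fc + 8.0 * sd_fc
    h = (hi - lo) / n
    num = 0.0
    den_int = 0.0
    for k in range(n + 1):
        x = lo + k * h
        w = h * (0.5 if k in (0, n) else 1.0)   # trapezoidal weights
        p = phi((thresh - x - mu_eps) / sd_eps)
        f = norm_pdf(x, mu_fc, sd_fc)
        den_int += w * p * f
        if x <= thresh:
            num += w * (1.0 - p) * f
    return num / (1.0 - den_int)
```

As a sanity check, when the residual spread σ_ε is very small the forecast almost never contradicts the true value, so the false-trigger probability is close to zero.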
In addition, for the forecasted RSSI values of the neighbouring mesh nodes, and by analogy with the probability of making a false trigger on the current mesh node, we define for each neighbouring mesh node the probability of making a false selection of the ith node at time t as
$$P\left( {\left. {\hat{Y}_{t + \ell \Updelta t}^{(i)} > \bar{Y}^{(i)} + \Updelta_{{y^{(i)} }}^{CU} } \right|Y_{t + \ell \Updelta t}^{(i)} \le \bar{Y}^{(i)} + \Updelta_{{y^{(i)} }}^{CU} } \right),\quad i = 1,2, \ldots ,M$$
(31)
where $$\hat{Y}_{t + \ell \Updelta t}^{(i)}$$ and $$\bar{Y}^{(i)}$$ are the ith mesh node’s forecasted RSSI value for leads $$\ell \ge 1$$ and its RSSI threshold value respectively. Furthermore by deducing the residuals $$Y_{t}^{(i)} - \hat{Y}_{t}^{(i)}$$ as
$$\hat{\varepsilon }_{t}^{(i)} = Y_{t}^{(i)} - \hat{Y}_{t}^{(i)} \sim N\left( {\mu_{\varepsilon }^{(i)} ,\,\left( {\sigma_{\varepsilon }^{(i)} } \right)^{2} } \right)$$
(32)
where $$E\left( {\hat{\varepsilon }_{t}^{(i)} } \right) = \mu_{{\hat{\varepsilon }}}^{(i)}$$, $$Var(\hat{\varepsilon }_{t}^{(i)} ) = \left( {\sigma_{{\hat{\varepsilon }}}^{(i)} } \right)^{2}$$ and $$Y_{t}^{(i)}$$ is the smoothed RSSI value at time t for the ith neighbouring mesh node. Hence, by analogy with ( 31), we can write that P( false node selection) =
$$P(\hat{Y}_{t\,\, + \,\,\ell \Updelta t}^{(i)} \,\, > \,\,\bar{Y}_{{}}^{(i)} \,\, + \,\,\Updelta_{{y^{\left( i \right)} }}^{CU} \,\,|\,\,Y_{t\,\, + \,\,\ell \Updelta t}^{(i)} \,\, \le \,\,\bar{Y}_{{}}^{(i)} \,\, + \,\,\Updelta_{{y^{\left( i \right)} }}^{CU} \,)\, = \,\frac{{\int_{{\,\,y\,\, = \,\,\bar{Y}_{{}}^{(i)} \,\, + \,\,\Updelta_{{y^{\left( i \right)} }}^{CU} }}^{\,\infty \,} {\Upphi \left( {\frac{{\bar{Y}^{(i)} \,\, + \,\,\Updelta_{{y^{\left( i \right)} }}^{CU} \,\, - \,\,y\,\, - \,\,\mu_{{\hat{\varepsilon }}}^{(i)} }}{{\sigma_{{\hat{\varepsilon }}}^{(i)} }}} \right)} \,\,f_{{\hat{Y}^{(i)} }} \left( y \right)\,\,dy\,}}{{\int_{\,\, - \infty }^{\,\,\,\infty } {\Upphi \left( {\frac{{\bar{Y}^{(i)} \,\, + \,\,\Updelta_{{y^{\left( i \right)} }}^{CU} \,\, - \,\,y\,\, - \,\,\mu_{{\hat{\varepsilon }}}^{(i)} }}{{\sigma_{{\hat{\varepsilon }}}^{(i)} }}} \right)\,\,f_{{\hat{Y}^{(i)} }} \left( y \right)\,\,dy\,} }}$$
(33)
where Z ~  N(0,1), $$\Upphi ( \cdot )$$ denotes the cumulative standard normal distribution function and $$f_{{\hat{Y}^{(i)} }} (y) = \frac{1}{{\sigma_{{\hat{Y}_{t + \ell \Updelta t}^{(i)} }} \sqrt {2\pi } }}e^{{ - \frac{1}{2}\left( {\frac{{y - \mu_{{\hat{Y}_{t + \ell \Updelta t}^{(i)} }} }}{{\sigma_{{\hat{Y}_{t + \ell \Updelta t}^{(i)} }} }}} \right)^{2} }}$$ is the probability density function (pdf) of the forecasted RSSI values of the neighbouring ith mesh node with mean $$\mu_{{\hat{Y}_{t + \ell \Updelta t}^{(i)} }}$$ and variance $$\sigma_{{\hat{Y}_{t + \ell \Updelta t}^{(i)} }}^{2}$$. Hence we can now redefine our criterion of a handover at time t from a current mesh node to the jth mesh node as:
$$\left\{ {\begin{array}{*{20}c} {E\left( {\hat{X}_{t + \ell \Updelta t} } \right) \le \bar{X} + \Updelta_{x}^{GD} \,and} \\ {P\left( {\hat{X}_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} } \right) \ge \alpha \,\,and} \\ {P\left( {\left. {\hat{X}_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} } \right|X_{t + \ell \Updelta t} > \bar{X} + \Updelta_{x}^{GD} } \right) \le \bar{\alpha }} \\ \end{array} } \right\}$$
(34)
and
$$\left\{ {\begin{array}{*{20}c} {E\left( {\hat{Y}_{t + \ell \Updelta t}^{(j)} } \right) \ge \bar{Y}^{(j)} + \Updelta_{{y^{(j)} }}^{CU} \,and} \\ {P\left( {\hat{Y}_{t + \ell \Updelta t}^{(j)} \ge \bar{Y}^{(j)} + \Updelta_{{y^{(j)} }}^{CU} } \right) \ge \beta \,\,and} \\ {P\left( {\left. {\hat{Y}_{t + \ell \Updelta t}^{(j)} \ge \bar{Y}^{(j)} + \Updelta_{{y^{(j)} }}^{CU} } \right|Y_{t + \ell \Updelta t}^{(j)} < \bar{Y}^{(j)} + \Updelta_{{y^{(j)} }}^{CU} } \right) \le \bar{\beta }} \\ \end{array} } \right\}$$
(35)
where $$\bar{\alpha },\,\bar{\beta } \in \,(0,1)$$ and the index j is defined as
$$j = \left\{ {i:\hbox{max} \left\{ {P\left( {\hat{Y}_{t + \ell \Updelta t}^{(i)} \ge \bar{Y}^{(i)} + \Updelta_{{y^{(i)} }}^{CU} } \right),\quad i = 1,2, \ldots ,M} \right\}} \right\}$$
(36)
To further reduce the probability of making a false trigger and the probability of selecting the wrong AP, we can modify the above decision criteria to the following scheme:
Trigger from a current mesh node to the jth mesh node at time t when
$$\left\{ {\begin{array}{*{20}c} {\frac{1}{m}\sum\limits_{i = \ell }^{\ell + m - 1} {E\left( {\hat{X}_{t + i\Updelta t} } \right) \le \bar{X} + \Updelta_{x}^{GD} \,and} } \\ {\frac{1}{m}\sum\limits_{i = \ell }^{\ell + m - 1} {P\left( {\hat{X}_{t + i\Updelta t} \le \bar{X} + \Updelta_{x}^{GD} } \right) \ge \alpha \,\, and} } \\ {\frac{1}{m}\sum\limits_{i = \ell }^{\ell + m - 1} {P\left( {\hat{X}_{t + i\Updelta t} \le \bar{X} + \Updelta_{x}^{GD} \left| {X_{t + i\Updelta t} > \bar{X} + \Updelta_{x}^{GD} } \right.} \right) \le \bar{\alpha }} } \\ \end{array} } \right\}$$
(37)
and
$$\left\{ {\begin{array}{*{20}c} {\frac{1}{m}\sum\limits_{i = \ell }^{\ell + m - 1} {E\left( {\hat{Y}_{t + i\Updelta t}^{(j)} } \right) \ge \bar{Y}^{(j)} + \Updelta_{{y^{(j)} }}^{CU} \,and} } \\ {\frac{1}{m}\sum\limits_{i = \ell }^{\ell + m - 1} {P\left( {\hat{Y}_{t + i\Updelta t}^{(j)} \ge \bar{Y}^{(j)} + \Updelta_{{y^{(j)} }}^{CU} } \right) \ge \beta \,\, and} } \\ {\frac{1}{m}\sum\limits_{i = \ell }^{\ell + m - 1} {P\left( {\hat{Y}_{t + i\Updelta t}^{(j)} \ge \bar{Y}^{(j)} + \Updelta_{{y^{(j)} }}^{CU} \left| {Y_{t + i\Updelta t}^{(j)} < \bar{Y}^{(j)} + \Updelta_{{y^{(j)} }}^{CU} } \right.} \right) \le \bar{\beta }} } \\ \end{array} } \right\}$$
(38)
where m ≥ 1, $$\bar{\alpha },\,\bar{\beta } \in \,(0,1)$$ and the index j is defined as
$$j = \left\{ {i:\hbox{max} \left\{ {P\left( {\hat{Y}_{t + \ell \Updelta t}^{(i)} \ge \bar{Y}^{(i)} + \Updelta_{{y^{(i)} }}^{CU} } \right),\, i = 1,2, \ldots ,M} \right\}} \right\}$$
(39)
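The averaged criterion (37) can be sketched in code for the current-link side, reusing the forecast mean and variance formulas from Sect. 3.1. The false-trigger term of (37) and the neighbour-side test (38) are omitted in this sketch, and all names are our illustrative choices:

```python
import math

def forecast_mean_var(x_t, ell, dt, kappa, theta, sigma, lam, mu_j, var_j):
    """ell-step-ahead mean and variance of the OU jump diffusion (Sect. 3.1)."""
    decay = math.exp(-kappa * ell * dt)
    mean = x_t * decay + (theta + lam * mu_j / kappa) * (1.0 - decay)
    var = (sigma ** 2 * (1.0 - math.exp(-2.0 * kappa * ell * dt)) / (2.0 * kappa)
           + lam / (2.0 * kappa) * (mu_j ** 2 + var_j))
    return mean, var

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def averaged_trigger(x_t, params, dt, ell, m, x_bar, margin_gd, alpha):
    """Current-link side of criterion (37): average the forecast means and
    the below-threshold probabilities over leads ell..ell+m-1, then test
    both averages against the enhanced threshold and alpha."""
    thresh = x_bar + margin_gd
    means, probs = [], []
    for i in range(ell, ell + m):
        mean, var = forecast_mean_var(x_t, i, dt, *params)
        means.append(mean)
        probs.append(phi((thresh - mean) / math.sqrt(var)))
    return (sum(means) / m <= thresh) and (sum(probs) / m >= alpha)
```

Averaging over m leads makes the decision less sensitive to a single optimistic or pessimistic forecast than the one-shot test in (22).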

### 3.4 Trigger algorithm

Based on the analysis so far, the following is the proposed algorithm in the form of a pseudo-code:
Given the parameter values $$\ell$$, m, α, $$\bar{\alpha }$$, β, $$\bar{\beta }$$, $$\bar{X}$$, $$\bar{Y}^{(i)}$$, $$\Updelta_{x}^{GD}$$, $$\Updelta_{{y^{(i)} }}^{CU}$$, i = 1, 2, …, M
Step 1.
Select a window size N from the latest smoothed RSSI values $$\left\{ {X_{i} } \right\}_{i = 1}^{N}$$ of the current mesh node and also $$\left\{ {Y_{j}^{(i)} } \right\}_{j = 1}^{{N^{(i)} }}$$ for each M neighbouring mesh nodes with their respective window size N (i), i = 1, 2, …, M
Step 2.
Extract the jump components from $$\left\{ {X_{i} } \right\}_{i = 1}^{N}$$ and $$\left\{ {Y_{j}^{(i)} } \right\}_{j = 1}^{{N^{(i)} }}$$, i = 1, 2, …, M and estimate the OU jump diffusion process model parameters
Step 3.
Forecast smoothed RSSI values for lead time $$\ell \Updelta t$$ for all current and neighbouring mesh nodes
Step 4.
If the averaged conditions in (37) are satisfied, issue an LGD trigger for the current link; if those in (38) are satisfied, flag the neighbouring node j selected by (39) as the handover candidate; otherwise, issue no trigger
Step 5.
Update the latest RSSI values and return to Step 1.

## 4 Analysis, design and implementation of OU-LPT technique

In the previous section we proposed the OU jump diffusion algorithm based on a modified mean-reverting diffusion process, which allows for a reliable forecast of the RSSI values of local and neighbouring mesh links. This section analyses the proposed solution both in Matlab simulation and in a real-time experimental testbed. The datasets adopted in our analyses represent two distinct environments, namely indoor and outdoor. Further experimentation with different datasets may yield different levels of improvement, but for an initial proof of concept of the proposed OU-LPT, the existing datasets are believed to be sufficient to provide valuable insights into what this technique may offer.

### 4.1 OU-LPT simulation analysis

In this section we evaluate the performance of the proposed OU-LPT technique in Matlab using real RSSI data (courtesy of Intel and Fraunhofer FOKUS). For the Intel dataset, RSSI values of beacon frames were measured in an indoor office environment between a laptop and an IEEE 802.11g Access Point with a transmission power of 15 dBm. The laptop moved around the office at a speed of approximately 0.5 m/s. The Fraunhofer dataset, on the other hand, was measured outdoors (open field) between two stationary wireless mesh nodes 50 m apart. Each node was equipped with an IEEE 802.11g radio card with a transmit power of 14 dBm.
In this analysis we strictly follow the criteria set by Intel [ 20] in defining the LD and LGD thresholds using the RSSI. Here the LD threshold is set at −80 dBm and the LGD threshold at −76 dBm, which results in a protection margin $$\Updelta_{x}^{GD}$$ of 4 dB. As the RSSI values do not seem to exhibit any trends or seasonal patterns, and for fast computation, the moving average (MA) technique is the most suitable smoothing approach, since it weights all data points equally. Other smoothing techniques such as the weighted moving average (WMA) require the weighting factors to be chosen in an ad hoc manner or through some estimation method, and are therefore impractical for our study. A detailed analysis of various smoothing techniques, though desirable, is not the focus of this paper. In the following experiments, the OU-LPT parameters used are: N = 30, $$\ell = 5$$, $$\Updelta_{x}^{GD} = 4\,{\text{dB}}$$, m = 5, α = 0.60, $$\bar{\alpha } = 0.10$$ and a smoothing window size of 10.
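As a minimal sketch, the equal-weight MA smoother described above can be written as follows (Python used for illustration; the truncated handling of the first few samples, before a full window is available, is our assumption):

```python
def moving_average(rssi, window=10):
    """Equal-weight moving average; a window of 10 matches the experiments above."""
    smoothed = []
    for i in range(len(rssi)):
        w = rssi[max(0, i - window + 1): i + 1]   # truncated window at the series start
        smoothed.append(sum(w) / len(w))
    return smoothed
```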
In Figs. 6 and 7, we display two snapshots of the triggering activities between 150–200 and 300–350 s, respectively. From the figures we can see that the prediction mechanism is able to issue a trigger at a very early stage, before the smoothed RSSI values fall below the LGD threshold. Furthermore, the small number of false triggers (false alarms) and false non-triggers (missed triggers) attests to the suitability of modelling the RSSI values as a stochastic process.
In Figs. 8 and 9, we display the triggering mechanism using the same set of parameters and time intervals for the OU jump diffusion process, except that the prediction of future smoothed RSSI values is obtained via a linear regression (LR) method. Here, for a series of N consecutive smoothed RSSI values $$X_{t} ,X_{t + \Updelta t} ,X_{t + 2\Updelta t} , \ldots ,X_{t + (N - 1)\Updelta t}$$ the model is defined as
$$X_{k} = \beta_{1} * t_{k} + \beta_{0} + \varepsilon ,\,\varepsilon \sim N\left( {0,\sigma_{\varepsilon }^{2} } \right)$$
(40)
where $$t_{k}$$ is the time index for $$X_{k}$$,
$$\beta_{1} = \frac{{\sum\limits_{k = t}^{t + (N - 1)\Updelta t} {(t_{k} - \bar{t})(X_{k} - \bar{X})} }}{{\sum\limits_{k = t}^{t + (N - 1)\Updelta t} {(t_{k} - \bar{t})^{2} } }},\,\beta_{0} = \bar{X} - \beta_{1} *\bar{t}$$
(41)
such that $$\bar{X} = \frac{1}{N}\sum\nolimits_{k = t}^{t + (N - 1)\Updelta t} {X_{k} }$$ and $$\bar{t} = \frac{1}{N}\sum\nolimits_{k = t}^{t + (N - 1)\Updelta t} {t_{k} }$$.
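Equations (40) and (41) translate directly into code; the sketch below (Python, with hypothetical names) fits the line by least squares and extrapolates the forecast `lead` steps beyond the last sample:

```python
def lr_forecast(x, dt=1.0, lead=5):
    """Least-squares fit of Eq. (41), then extrapolate `lead` steps past the last sample."""
    n = len(x)
    t = [k * dt for k in range(n)]
    t_bar, x_bar = sum(t) / n, sum(x) / n
    b1 = (sum((tk - t_bar) * (xk - x_bar) for tk, xk in zip(t, x))
          / sum((tk - t_bar) ** 2 for tk in t))          # slope, Eq. (41)
    b0 = x_bar - b1 * t_bar                               # intercept, Eq. (41)
    return b1 * (t[-1] + lead * dt) + b0                  # forecast from Eq. (40)
```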
By comparing Figs. 6, 7, 8 and 9 we can see that the linear regression approach is more likely to produce a false trigger than the proposed OU-LPT technique. In this paper we compare OU-LPT only with LR, because both models are linear in construction and the comparison is therefore like-for-like. Note that the proposed OU-LPT is based on stochastic-process modelling of the velocity of the random movements of the RSSI values, whilst LR only captures the relationship between explanatory and response variables. Time series models are not considered in this study because they are too computationally intensive: one must perform a stationarity test of the data, model identification, parameter estimation and diagnostic checking before the model can be used. Given the time constraints of the triggering process, we therefore exclude this class of techniques; moreover, from an implementation point of view, time series models are less practical than our OU-LPT technique.
Table 2 compares the trigger statistics of OU-LPT and the linear regression method. The table shows that our proposed method significantly reduces the rate of false triggers (7.63 % out of 24.20 % of trigger occurrences) compared with the conventional linear regression method (38.10 % out of 36.49 % of trigger occurrences), which is a brute-force method that does not model fat-tailed distributions. However, the percentage of false non-triggers is quite comparable for both methods (10.46 and 9.90 %). We also notice that the percentage of false non-triggers (missed triggers) is 10.46 % whilst the percentage of false triggers (false alarms) is 7.63 %. The discrepancy can be attributed to the high volatility of the signal as well as to the choice of the 4 dB protection margin, as most of the RSSI values reside near −76 dBm. This observation is therefore specific to Intel's dataset. Figure 10 shows the error analysis of both methods; from the histogram plots we can deduce that the errors generated by both methods are approximately normally distributed. However, the errors from OU-LPT tend to have a smaller standard deviation, and hence are less dispersed.
Table 2
Trigger results for the Intel dataset using the OU-LPT and linear regression techniques

| Description of trigger | OU-LPT (%) | LR (%) | Improvement (%) |
| --- | --- | --- | --- |
| Triggers | 24.20 | 36.49 | −12.29 |
| False triggers | 7.63 | 38.10 | −30.47 |
| Non-triggers | 75.80 | 63.51 | +12.29 |
| False non-triggers | 10.46 | 9.90 | +0.56 |
Although both methods have a comparable lead time, which is the time difference between the first successful trigger and the moment the signal strength goes below the LD threshold, the proposed OU-LPT technique reduces the chances of making a false trigger or a missed trigger and is therefore by far a more reliable method than linear regression.
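The statistics of Table 2 can be reproduced from per-step bookkeeping along the following lines (a Python sketch; the exact counting convention used in the paper is not spelled out, so this is one plausible reading in which false triggers are counted out of triggers and false non-triggers out of non-triggers):

```python
def trigger_stats(predicted, actual):
    """predicted[i]: a trigger was issued at step i; actual[i]: the smoothed RSSI
    did fall below the LGD threshold at the forecast horizon of step i."""
    n = len(predicted)
    trig = [i for i, p in enumerate(predicted) if p]
    non = [i for i, p in enumerate(predicted) if not p]
    pct_trig = 100.0 * len(trig) / n
    pct_false_trig = 100.0 * sum(1 for i in trig if not actual[i]) / len(trig) if trig else 0.0
    pct_false_non = 100.0 * sum(1 for i in non if actual[i]) / len(non) if non else 0.0
    return pct_trig, pct_false_trig, pct_false_non
```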

### 4.2 OU-LPT module design and real-time implementation

Fast variations of the radio channel characteristics make it necessary to introduce a smoothing mechanism that deals with the raw measurements and avoids incorrect decisions based on temporarily measured parameter values. To overcome this problem, a two-level averaging process has been implemented within the measurement modules as well as in the monitoring aggregation module. An additional long-term statistics repository has been created to provide a feedback mechanism for routing with a link-stability description. Link-related events are also predicted and reported by the link triggering and prediction module. It determines the state-change and predictive events shown in Table 3, as defined in IEEE 802.21 [ 21]. In a mesh environment, the reliability of the wireless backhaul links is extremely critical, as any link disruption may affect more than one node. For that reason, such predictive triggers are particularly important to ensure carrier-grade performance.
Table 3
Link state prediction result or trigger

| Output/prediction | Description | Event type |
| --- | --- | --- |
| Link down (LD) | The link has lost its connection | State change |
| Link going down (LGD) | High probability of the link losing its connection status | Predictive |
| Link up (LU) | The link is above the threshold value | State change |
| Link coming up (LCU) | The probability of the link recovering its signal is high | Predictive |
Figure 11 presents the general architecture of the OU-LPT module. The Data Aggregator submodule is responsible for pulling the required raw data. The data, which is generally retrieved at a fixed interval, can be RSSI, throughput or delay, depending on the usage requirement. After the required data has been gathered, the values are passed on to the Data Pre-processor submodule, which prepares the data before handing it to the Predictor submodule. Its tasks include reformatting, synchronization with the real-time clock, re-sampling and smoothing, as required by the Predictor submodule. The smoothing process reduces the fluctuation in the raw signal values and converts the time series into a data set with fewer fluctuations, which helps to prevent unnecessary triggers later on. As the name implies, the Predictor submodule analyzes the time series data and predicts the future state of the link, i.e. LGD. Alternatively, other conventional prediction algorithms such as linear regression, Lagrange or Newton extrapolation, etc., can be used. The link state prediction results, together with computed data such as trigger and error probabilities, are subsequently stored in the Data Buffer. This prediction data can also be stored back in a repository for further processing.
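The data flow of Fig. 11 can be sketched as a small pipeline with a pluggable predictor, mirroring the remark that other prediction algorithms can be substituted (Python used for illustration; all names are ours):

```python
def ou_lpt_pipeline(raw_samples, smooth_window, predictor):
    """Fig. 11 data flow: aggregated raw data -> Pre-processor (smoothing) -> Predictor."""
    smoothed = []
    for i in range(len(raw_samples)):
        w = raw_samples[max(0, i - smooth_window + 1): i + 1]
        smoothed.append(sum(w) / len(w))   # equal-weight moving average
    return predictor(smoothed)             # pluggable: OU-LPT, linear regression, ...
```

A predictor here is any callable mapping the smoothed series to a forecast, so the OU-LPT and LR predictors can be swapped without touching the rest of the module.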
The OU-LPT module has been implemented in C and runs on a relatively slow Soekris net5501 500 MHz 586-class embedded system board, together with the other modules defined in the CARMEN project for the mesh node architecture (resource-aware routing, admission control, spectrum management, monitoring, self-configuration and support for mobile users). The OU-LPT module comprises just over a thousand lines of code and uses only native C libraries. Compared with the Matlab implementation, the real-time implementation poses a greater set of challenges, as there are different options for implementing the same mathematical function, and the chosen approach affects both accuracy and computation time. In our implementation, the average time required to generate a prediction on the above-mentioned board, with 512 MByte DDR-SDRAM and running the Ubuntu 8.04 LTS operating system, is less than 28 ms. This is much lower than 100 ms, the sampling interval within which a prediction must be produced in order to be useful. The prediction process also consumes around 6.9 MIPS of CPU usage. Although the OU-LPT module is able to capture data in real time, we have instructed it to read the same pre-recorded datasets (from Intel and Fraunhofer) as if it were acquiring the signal from the WLAN card in real time, to enable a direct comparison between the real-time and Matlab implementations. Table 4 summarizes the parameters used:
Table 4
Default settings for OU-LPT operation
| Group | Parameter | Value |
| --- | --- | --- |
| General OU-LPT | Data sample interval/step size | 100 ms |
| General OU-LPT | Moving average window size | 10 |
| Jump diffusion algorithm | Prediction window size, N | 30 |
| Jump diffusion algorithm | Prediction steps (look-ahead time) | 5 steps (500 ms) |
| Jump diffusion algorithm | Protection margin | 4 dB |
| Jump diffusion algorithm | LD threshold | −80 dBm |
Figure 12 compares the predicted RSS values of the real-time implementation and the Matlab simulation using the Intel dataset. As observed, the predicted values are relatively close. On average, however, Matlab produces less bursty values across the whole data set. This can be observed around time 3,400 s, where the real-time result shows a higher variance than the Matlab result.
Figure 13 shows the error histogram comparing the Matlab and real-time OU-LPT implementations. The result shows that the predictions generated by the real-time OU-LPT deviate only slightly from those generated by Matlab. When examined across time, the predicted values are essentially the same except at certain points where differences appear. The errors mainly occur during the drastic change of signal strength at around 3,400 s (Fig. 12). This is confirmed by the fat-tail effect on the left-hand side of the scatter plot in Fig. 14. It is also observed that the errors generally occur at lower RSS values (<−98 dBm) and therefore may not affect the accuracy of the triggers. The discrepancies between the Matlab and real-time C implementations are believed to be caused by different approaches to implementing certain mathematical functions, such as the integration.
Figure 15 shows the analysis using another dataset (contributed by Fraunhofer FOKUS). This dataset does not contain any drastic drops in signal strength, and even where a drop occurs it is very gradual and hence easy to predict.
It can be observed from Figs. 16 and 17 that, when analyzing the dataset provided by Fraunhofer FOKUS, the error between the Matlab and real-time implementations is relatively small. The fat-tail issue does not arise in this case, owing to the less drastic signal fluctuations. The above evaluation shows that the OU-LPT technique can be implemented with relatively small computational overhead and complexity.
The OU-LPT graphical visualizer has been developed using the GTK toolkit ( http://www.gtk.org), which is part of the GNU project. It is a cross-platform widget toolkit used to develop GUIs. Figure 18 gives a snapshot of the OU-LPT visualizer.
The visualizer shows the current smoothed signal and also the predicted signal. The yellow markers indicate the LGD trigger events while the red markers indicate the LD events. The dark blue line represents the LD threshold. Other statistics, such as the trigger probability and false-trigger probability of each trigger, can be computed and shown in real time.

## 5 Conclusions

A monitoring system is an integral part of every wireless mesh network. It provides other modules with accurate and timely information on the status of the network and predicts the quality of the wireless links. The prediction results are used to reconfigure the network in advance so as to avoid service disruption. This paper proposes a novel link prediction and triggering technique based on a modified mean-reverting diffusion process. The analysis shows that the proposed OU-LPT method can significantly enhance the reliability of wireless links, which is particularly critical in a wireless mesh environment. A significant improvement has been observed in reducing the rate of false triggers (from 38.10 to 7.63 % of total trigger occurrences) compared with the conventional linear regression method. Moreover, the errors of OU-LPT exhibit a smaller standard deviation, implying that they are less dispersed. The link-up scenario is not addressed in this paper because it generally operates in the direct opposite manner to the link-down scenario; the link-up prediction can, however, be used for early preparation of a link's return to normal operation. The proposed OU-LPT algorithm has also been successfully implemented and evaluated on a real-time embedded system board. Overall, the OU-LPT technique is found to be promising, and it offers a new direction for how wireless link prediction, triggering and switching can be conducted in the future.

## Acknowledgments

This work was partially funded by the European Commission within the 7th Framework Program in the context of the ICT project Carrier-Grade Mesh Networks (CARMEN) (Grant Agreement No. 214994). The views and conclusions contained here are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the CARMEN project or the European Commission. The authors would also like to thank Intel and Fraunhofer FOKUS for contributing the datasets.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

## Appendix

The following are the steps to derive the probabilities of making a false trigger (or false alarm) and making a false non-trigger (or missed trigger) for the link handover process.
\begin{aligned} P({\text{false trigger}}) = P(\hat{X}_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} |X_{t + \ell \Updelta t} > \bar{X} + \Updelta_{x}^{GD} ) \\ & = \frac{{P(\hat{X}_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} ,\,X_{t + \ell \Updelta t} > \bar{X} + \Updelta_{x}^{GD} )}}{{P\left( {X_{t + \ell \Updelta t} > \bar{X} + \Updelta_{x}^{GD} } \right)}} \\ & = \frac{{P(X_{t + \ell \Updelta t} > \bar{X} + \Updelta_{x}^{GD} ,\,\hat{X}_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} )}}{{P\left( {X_{t + \ell \Updelta t} > \bar{X} + \Updelta_{x}^{GD} } \right)}} \\ & = \frac{{P\left( {Z > \frac{{\bar{X} + \Updelta_{x}^{GD} - \hat{X}_{t + \ell \Updelta t} - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }},\,\hat{X}_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} } \right)}}{{P\left( {Z > \frac{{\bar{X} + \Updelta_{x}^{GD} - \hat{X}_{t + \ell \Updelta t} - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right)}} \\ & = \frac{{\int_{ - \infty }^{{\bar{X} + \Updelta_{x}^{GD} }} {\left[ {1 - P\left( {\left. {Z \le \frac{{\bar{X} + \Updelta_{x}^{GD} - \hat{X} - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right|\hat{X} = x} \right)} \right]} \,f_{{\hat{X}}} (x)dx}}{{1 - \int_{ - \infty }^{\infty } {P\left( {\left. {Z \le \frac{{\bar{X} + \Updelta_{x}^{GD} - \hat{X} - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right|\hat{X} = x} \right)\,f_{{\hat{X}}} (x)dx} }} \\ & = \frac{{\int_{ - \infty }^{{x = \bar{X} + \Updelta_{x}^{GD} }} {\left[ {1 - \Upphi \left( {\frac{{\bar{X} + \Updelta_{x}^{GD} - x - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right)} \right]} \,f_{{\hat{X}}} \left( x \right)dx}}{{1 - \int_{ - \infty }^{\infty } {\Upphi \left( {\frac{{\bar{X} + \Updelta_{x}^{GD} - x - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right)\,f_{{\hat{X}}} \left( x \right)dx} }}. \\ \end{aligned}
(42)
\begin{aligned} P({\text{false}}\,\,{\text{non - trigger}}) = P(\hat{X}_{t + \ell \Updelta t} > \bar{X} + \Updelta_{x}^{GD} |X_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} ) \\ = \frac{{P(\hat{X}_{t + \ell \Updelta t} > \bar{X} + \Updelta_{x}^{GD} ,\,X_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} )}}{{P\left( {X_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} } \right)}} \\ & = \frac{{P(X_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} ,\,\hat{X}_{t + \ell \Updelta t} > \bar{X} + \Updelta_{x}^{GD} )}}{{P\left( {X_{t + \ell \Updelta t} \le \bar{X} + \Updelta_{x}^{GD} } \right)}} \\ & = \frac{{P\left( {Z \le \frac{{\bar{X} + \Updelta_{x}^{GD} - \hat{X}_{t + \ell \Updelta t} - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }},\,\hat{X}_{t + \ell \Updelta t} > \bar{X} + \Updelta_{x}^{GD} } \right)}}{{P\left( {Z \le \frac{{\bar{X} + \Updelta_{x}^{GD} - \hat{X}_{t + \ell \Updelta t} - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right)}} \\ & = \frac{{\int_{{\bar{X} + \Updelta_{x}^{GD} }}^{\infty } {P\left( {\left. {Z \le \frac{{\bar{X} + \Updelta_{x}^{GD} - \hat{X} - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right|\hat{X} = x} \right)} \,f_{{\hat{X}}} (x)dx}}{{\int_{ - \infty }^{\infty } {P\left( {\left. {Z \le \frac{{\bar{X} + \Updelta_{x}^{GD} - \hat{X} - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right|\hat{X} = x} \right)\,f_{{\hat{X}}} (x)dx} }} \\ & = \frac{{\int_{{x = \bar{X} + \Updelta_{x}^{GD} }}^{\infty } {\Upphi \left( {\frac{{\bar{X} + \Updelta_{x}^{GD} - x - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right)} \,f_{{\hat{X}}} \left( x \right)dx}}{{\int_{ - \infty }^{\infty } {\Upphi \left( {\frac{{\bar{X} + \Updelta_{x}^{GD} - x - \mu_{{\hat{\varepsilon }}} }}{{\sigma_{{\hat{\varepsilon }}} }}} \right)\,f_{{\hat{X}}} \left( x \right)dx} }}. \\ \end{aligned}
(43)
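The false-trigger probability (42) can be evaluated numerically. The sketch below (Python) assumes the forecast $$\hat{X}$$ is normally distributed with user-supplied mean and standard deviation, approximates both integrals with a trapezoidal rule, and builds $$\Upphi$$ from `math.erf`; the function names and parameters are ours:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def false_trigger_prob(thr, mu_hat, sd_hat, mu_eps, sd_eps, grid=4000, span=8.0):
    """Trapezoidal evaluation of Eq. (42), assuming X_hat ~ N(mu_hat, sd_hat^2);
    thr is the threshold X_bar + Delta_x^GD."""
    lo, hi = mu_hat - span * sd_hat, mu_hat + span * sd_hat
    h = (hi - lo) / grid
    num = den = 0.0
    for i in range(grid + 1):
        x = lo + i * h
        w = 0.5 if i in (0, grid) else 1.0      # trapezoid end-point weights
        f = math.exp(-0.5 * ((x - mu_hat) / sd_hat) ** 2) / (sd_hat * math.sqrt(2.0 * math.pi))
        p = phi((thr - x - mu_eps) / sd_eps)    # P(Z <= ...) given X_hat = x
        den += w * p * f * h                    # integral in the denominator of (42)
        if x <= thr:
            num += w * (1.0 - p) * f * h        # numerator integrates only up to the threshold
    return num / (1.0 - den)
```

The false non-trigger probability (43) follows the same pattern with the numerator integrated from the threshold upwards and the denominator taken without the complement.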
