

01.12.2015 | Research | Issue 1/2015 | Open Access

Journal:
EURASIP Journal on Wireless Communications and Networking > Issue 1/2015
Authors:
Xiaoyu Duan, Auon Muhammad Akhtar, Xianbin Wang
Important notes

## Competing interests

The authors declare that they have no competing interests.

## 1 Introduction

Over the last decade, cellular networks have witnessed an unprecedented growth in mobile data traffic, mainly due to the explosive development of smart and media-rich mobile devices. These smart devices enable ubiquitous mobile Internet access, traffic-intensive social applications, and cloud-based services [1]. According to Cisco’s Visual Networking Index report [2], mobile data traffic is expected to grow at a compound annual rate of 57 % through 2019, a tenfold increase over 2014. Current cellular networks are not capable of sustaining such high traffic volumes. Consequently, the fifth generation (5G) is being touted as the next-generation cellular standard. 5G networks are envisioned to have a densified heterogeneous network (HetNet) architecture, combining multiple radio access technologies (multi-RATs) into a single holistic network [3]. In order to optimize resource utilization and to support the predicted traffic volumes, 5G is expected to utilize a multitude of technologies including device-to-device (D2D) communications, data offloading, load balancing, and spectrum sharing.
All of the aforementioned works show significant performance improvements in terms of resource management; however, technical challenges remain, especially when the envisioned 5G HetNet architecture is taken into consideration. Firstly, uncoordinated Wi-Fi cells will be deployed as an overlay to the heterogeneous cellular cells [12, 13]; as a result, resource management becomes challenging in this two-tier architecture. Secondly, offloaded data is routed directly to the Internet through the Wi-Fi backbone, which is not under the control of the wireless operators, since Wi-Fi networks are usually owned by third parties. Consequently, operators are unwilling to lose control over valuable subscriber information [1], not to mention the security risks introduced by the loosely controlled nature of Wi-Fi networks. Moreover, existing network protocols have been designed for current levels of network density and will become performance bottlenecks under the highly densified small-cell deployments of 5G. For example, existing load balancing algorithms are mostly distributed, which causes ping-pong handovers due to the lack of global information [14]. Dense cells will also exchange unnecessarily frequent mobility management messages. Therefore, it comes as no surprise that operators are seeking to introduce intelligence while exerting greater control in 5G HetNet resource management [15].
In this paper, we consider software-defined networking (SDN) [16], a programmable network architecture, as an enabling solution for applying intelligence and control in 5G HetNets [15]. The idea of programmable networks has been around for many years; nevertheless, the advent of the OpenFlow interface has given new life to SDN [17]. OpenFlow was first introduced in [18], where the authors provide a uniform interface for researchers to program flow entries and to run experiments on Ethernet switches without any knowledge of the internal workings of the switch. In the context of wireless communications, the authors in [19] introduce OpenRoads, an open SDN platform which improves mobility management in HetNets. Related work in [20] further provides an SDN approach for handover management in heterogeneous networks, and the real-time test bed shows significant improvement in the QoS of real-time video.
The remainder of this paper is organized as follows: Section 2 outlines the network model considered in this work. SDN-based data offloading and load balancing schemes are presented in Section 3. The performance of SDN and the proposed algorithms is analyzed in Section 4. Simulation results are shown in Section 5 while the conclusions are drawn in Section 6.

## 2 Network model

We consider a HetNet environment consisting of cellular BSs and Wi-Fi APs, as shown in Fig. 1. The BSs communicate over the licensed band while the Wi-Fi APs communicate over the unlicensed band. The Wi-Fi APs are located randomly within each cell. The OpenFlow protocol is implemented on the BSs, APs, and switches in order to enable the SDN controller to easily control these network components via the OpenFlow secure channel. It is assumed that all of the BSs and Wi-Fi APs are connected to OpenFlow switches, as depicted in Fig. 1. For the cellular network, the switches are co-located with the BSs, i.e., each BS has its own OpenFlow-enabled switch. While the switches of the macrocells are connected to the core network, the femtocell switches are connected directly to the Internet. For the Wi-Fi APs, on the other hand, the switches are located within the ISP infrastructure and each switch is connected to multiple Wi-Fi APs. All of the switches are controlled by a centralized SDN controller. Given that the controller is only a program running on a server, it can be placed anywhere within the network, even in a remote data center [21].
Since the SDN controller is a fundamental component of the proposed architecture, it is imperative to discuss the SDN framework in detail. The controller application consists of three core modules, namely, authentication and charging (AC) module, offload manager (OM), and the load balancing (LB) module. The AC includes the identity management and charging record generation functionalities, which are used to enforce admission control and subscription-based charging, respectively. Once authenticated by the AC, a mobile user can access all of the available network resources. These resources are either owned by the network operator or they are leased from the available Wi-Fi networks. Next, for the OM module, the service-level rules describe the traffic features and characteristics which are required by the offload manager, e.g., related IP addresses, port numbers, bit rates, and delay sensitivity. These features and characteristics are based on the traffic flow template (TFT) filter [ 22] and the related QoS descriptions. Finally, the LB module includes the load measurement and mobility management functionalities, which collect cell load ratio and execute the load balancing algorithms. The complete SDN framework is presented in Fig. 2, where it can be seen that the SDN applications (AC, OM, and LB) utilize the global view of the APs to improve network management.
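As a concrete, purely illustrative sketch, the three controller modules described above might be organized as follows; all class and method names are our own assumptions for exposition, not part of the paper's specification:

```python
# Illustrative sketch of the three controller modules (AC, OM, LB).
# All names and interfaces here are assumptions for exposition only.

class AuthenticationCharging:
    """AC module: identity management and charging-record generation."""
    def __init__(self):
        self.subscribers = set()
        self.charging_records = []

    def authenticate(self, user_id):
        return user_id in self.subscribers

    def record_usage(self, user_id, volume_mb):
        self.charging_records.append((user_id, volume_mb))


class OffloadManager:
    """OM module: holds service-level rules (TFT-style traffic features)."""
    def __init__(self):
        self.rules = {}  # flow_id -> traffic features

    def register_flow(self, flow_id, ip, port, bit_rate, delay_sensitive):
        self.rules[flow_id] = {"ip": ip, "port": port,
                               "bit_rate": bit_rate,
                               "delay_sensitive": delay_sensitive}


class LoadBalancer:
    """LB module: collects per-cell load ratios for balancing decisions."""
    def __init__(self):
        self.cell_load = {}  # cell_id -> load ratio in [0, 1]

    def report_load(self, cell_id, load_ratio):
        self.cell_load[cell_id] = load_ratio
```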
Note that the proposed algorithms rest on the pre-condition that the cellular operator owns the Wi-Fi APs or has mutual agreements with their owners. In this way, Wi-Fi resources are available to relieve the cellular network burden, and all cell loads can be collected and monitored easily during the load balancing procedure.

## 3 SDN-based resource management

In this section, we present SDN-based resource management algorithms for the heterogeneous network shown in Fig. 1. To this end, we attempt to alleviate spectrum shortage and network congestion in the cellular network by using SDN-enabled mobile data offloading and load balancing. More specifically, to address the shortage of spectrum, data from the cellular network is offloaded onto a Wi-Fi network whenever the cellular users move within the Wi-Fi range. On the other hand, in order to deal with network congestion, load balancing is utilized to evenly distribute traffic across multiple cells. For both these cases, all of the decision-making is accomplished by the centralized SDN controller. As mentioned previously, the introduction of SDN facilitates efficient coordination between the cellular and Wi-Fi networks. Moreover, the performance of load balancing can also be improved significantly by exploiting the controller’s view of the whole network. In the next subsection, we present an SDN-based data offloading algorithm.

### 3.1 SDN-based partial data offloading

In this subsection, we present an SDN-based partial data offloading algorithm. Here, partial offloading refers to the fact that only part of the user data is offloaded onto the Wi-Fi network, while the remaining traffic is transferred across the cellular network. To elaborate further, if the Wi-Fi network is unable to meet the requirements of the delay- and loss-sensitive flows, the OM offloads only a limited amount of data. As we show later through analysis and simulations, the proposed algorithm improves application performance by taking various service requirements into consideration.
The proposed algorithm proceeds as follows: when a mobile user moves within the Wi-Fi coverage area and starts data transmission, the data is queued in the OpenFlow switch, awaiting further processing. The OM then executes a delay-threshold-based selection algorithm, which selects the appropriate traffic flows for partial offloading. Since Wi-Fi provides high data rates without QoS guarantees in the unlicensed band, the choice of traffic to offload must be made carefully. Our goal here is to reduce cellular usage by leveraging Wi-Fi connectivity when available, but without affecting application performance [7]. The algorithm used to select the appropriate flows for offloading is explained below:
Step 1) The OM collects the traffic flow requirements of all the users within the Wi-Fi area and calculates the resources required by each existing user (notice that with SDN, the operator can easily incorporate various functionalities in the OM to collect the required information from users). Assuming each user has a traffic demand $$u_{i}$$ and a link rate $${r^{w}_{i}}$$ to the Wi-Fi AP, every Wi-Fi user’s resource demand is computed as $${\theta ^{w}_{i}} = u_{i}/{{r^{w}_{i}}}$$ [23]. The available Wi-Fi resources for offloading can then be estimated accordingly.
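Under the stated assumptions, Step 1 reduces to a simple computation. The sketch below uses hypothetical per-user demands u_i and Wi-Fi link rates (both in Mb/s); treating the leftover airtime fraction as the resource available for offloading is our illustrative reading:

```python
# Sketch of Step 1: per-user resource shares theta_i = u_i / r_i and the
# residual Wi-Fi airtime available for offloading. Inputs are hypothetical.

def wifi_residual_capacity(demands, link_rates):
    """Return (per-user shares, fraction of airtime left for offloading)."""
    shares = [u / r for u, r in zip(demands, link_rates)]
    return shares, max(0.0, 1.0 - sum(shares))

# Two users: demands 2 and 1 Mb/s, link rates 20 and 10 Mb/s.
shares, residual = wifi_residual_capacity([2.0, 1.0], [20.0, 10.0])
```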
Step 2) The OM calculates the amount of data that can be transferred within the delay tolerance threshold over the Wi-Fi network. If Wi-Fi cannot transfer all of the data before the deadline, partial offloading is executed; otherwise, all of the data is offloaded onto the Wi-Fi network. Next, we calculate the application-specific delay tolerance threshold, $$T_{s}$$, and the amount of data, $$V_{s}$$, that can be transferred within $$T_{s}$$.

#### 3.1.1 Calculation of $$T_{s}$$

The OM estimates the user’s mobility direction and predicts future paths [21]. Assume that a mobile user enters and leaves a Wi-Fi offloading area at times $$t_{\text{in}}$$ and $$t_{\text{out}}$$, respectively, as shown in Fig. 3. Consequently, the Wi-Fi connection time is $$t_{c} = t_{\text{out}} - t_{\text{in}}$$. Figure 3 also shows the residual time of $$t_{c}$$, i.e., $$t_{r}$$, which is the duration of data transfer starting from time $$t$$. In this scenario, the delay tolerance threshold $$T_{s}$$ equals the difference between the application-specific deadline $$T_{d}$$, obtained from the TFT filter in the network model, and the waiting time during SDN processing, denoted by $$D$$. The calculation of $$D$$ is explained in the next section.
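The two quantities defined above can be sketched directly; t_in, t_out, T_d, and D are assumed inputs here, and clamping T_s at zero is our own defensive choice:

```python
# Minimal sketch of the timing quantities used by the OM.
# t_in/t_out: Wi-Fi entry/exit times; T_d: application deadline; D: SDN delay.

def connection_time(t_in, t_out):
    return t_out - t_in            # t_c = t_out - t_in

def delay_threshold(T_d, D):
    return max(0.0, T_d - D)       # T_s = T_d - D (clamped, an assumption)
```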

#### 3.1.2 Calculation of $$V_{s}$$

Note that $$V_{s}$$ is the amount of data in bytes that will be transmitted. Let $$b_{1}$$ and $$b_{2}$$, respectively, denote the bandwidths allocated by the cellular network (primary resource) and the Wi-Fi network (secondary resource). As shown in [8], the smaller of $$t_{r}$$ and $$T_{s}$$ should be used to calculate the actual transmitted data volume. Therefore, $$V_{s}$$ can be written as follows:
$$V_{s}=b_{1} T_{s}+b_{2}\ \text{min}(t_{r},T_{s}).$$
(1)
Based on the value of $$V_{s}$$, the OM decides between partial and total offloading, as shown in Algorithm 1.
Here, the proportions of $$b_{1}$$ and $$b_{2}$$ are specified in the next section. The principle behind Algorithm 1 is elaborated below: if $$d > V_{s}$$, the data is split into two parts. The first part, $$d_{1}$$, is the volume that can be transmitted through the Wi-Fi network before the deadline, while the remaining part, i.e., $$d - d_{1}$$, is transmitted through the cellular network at the same time. Updating $$d$$ enables the network to keep track of the volume that has been transmitted over the two paths. The procedure ends once all of $$d$$ has been transmitted.
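The decision in Algorithm 1 can be sketched as follows. As an illustrative reading of the text (not the paper's exact pseudocode), we take "total offloading" to apply when Wi-Fi alone can carry d before the deadline, i.e., when d does not exceed b2·min(t_r, T_s):

```python
# Sketch of the Algorithm 1 decision: split a flow of volume d (Mb) into a
# Wi-Fi part d1 and a cellular remainder. b1/b2 in Mb/s, t_r/T_s in s.
# The total-offloading condition used here is an assumption.

def offload_split(d, b1, b2, t_r, T_s):
    """Return (d1_wifi, d_cellular)."""
    wifi_cap = b2 * min(t_r, T_s)        # Wi-Fi volume deliverable in time
    if d <= wifi_cap:
        return d, 0.0                    # total offloading onto Wi-Fi
    return wifi_cap, d - wifi_cap        # partial: remainder goes cellular
```

For example, with d = 100 Mb, b2 = 5 Mb/s, t_r = 8 s, and T_s = 10 s, Wi-Fi can carry 40 Mb before the deadline and the remaining 60 Mb travel over the cellular link concurrently.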
Note that the delay caused by the packet re-assembly process at the application layer is negligible. The reason is that, in practical implementations, the data is not distributed on a per-packet basis; instead, it is distributed in bursts between the cellular and Wi-Fi networks. With concurrent transmission, these bursts arrive at the receiver at approximately the same time, minimizing the waiting period required for all the packets to arrive at the receiver device. Therefore, the partial offloading method reduces the overall delay in the packet arrival process.
Step 3) The OM updates the record of the available Wi-Fi resources after offloading and of the QoS that Wi-Fi can provide during the partial data offloading (PDO) algorithm. This record is maintained for a specific amount of time and enables rapid decision-making when a new traffic flow arrives.

### 3.2 SDN-based load balancing mechanism

Load balancing [ 6] has the ability to reduce network congestion over an area by distributing user traffic across neighboring APs or BSs. With load balancing, a proportionate share of wireless traffic can be guaranteed for better resource utilization. Since it is difficult to guarantee QoS with Wi-Fi, load balancing and vertical handovers of the edge users are seen as the enabling solutions to tackle network congestion. Both these strategies improve QoS by equally distributing the traffic load across the network [ 4].
Although load balancing has its advantages, there are scenarios where it can prove counter-productive, e.g., in low-latency applications such as voice or live (unbuffered) video streaming. More specifically, for mobile users, a high number of handovers impacts the voice quality and makes streaming video jittery. With SDN-based load balancing, however, one can take advantage of the controller’s view of the whole network. This makes it easier to find the optimal neighboring cell for load balancing with a minimum number of handovers. SDN-based load balancing proceeds as follows:
Step 1) When a source cell $$i$$ becomes overloaded, it sends a request to the controller, enquiring about the target neighboring cell for load balancing. A cell $$i$$ is identified as overloaded when its load ratio $$\text{LR}_{i}$$ exceeds a certain threshold. The load ratio $$\text{LR}_{i}$$ is formulated as [6]:
$$\text{LR}_{i}= w_{1}*U_{i}+w_{2}*R_{i},$$
(2)
where $$U_{i}$$ is the proportion of the number of user equipments (UEs) to cell $$i$$’s maximum UE capacity, $$R_{i}$$ is the ratio of the used resource blocks to the total resource blocks in cell $$i$$, while $$w_{1}$$ and $$w_{2}$$ are weight parameters which give the operators the option to assign higher preference to either $$U_{i}$$ or $$R_{i}$$.
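Eq. (2) can be sketched as a small helper; the weights w1 = w2 = 0.5 used in the example are an illustrative choice, not values from the paper:

```python
# Load ratio of Eq. (2): weighted sum of UE occupancy U_i and
# resource-block usage R_i. w1 + w2 = 1 keeps LR_i in [0, 1].

def load_ratio(n_ues, max_ues, used_rbs, total_rbs, w1=0.5, w2=0.5):
    U = n_ues / max_ues          # UE occupancy
    R = used_rbs / total_rbs     # resource-block usage
    return w1 * U + w2 * R

# A cell with 12/15 UEs and 40/50 RBs gives LR = 0.5*0.8 + 0.5*0.8 = 0.8.
```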
Step 2) The LB module calculates every neighboring cell’s environment state $$\text{ES}_{j}$$, which is the average load state of each neighbor cell $$j$$’s adjacent cells, excluding cell $$i$$ (denoted as layer 2 in Fig. 4).
The environment state of the neighboring cell j can be written as
$$\text{ES}_{j}= (\text{LR}_{1j}+\text{LR}_{2j}+...+ \text{LR}_{nj})/n,$$
(3)
where $$n$$ is the number of neighbor cell $$j$$’s layer-2 cells.
Step 3) The LB module computes the overall state $$\text{OS}_{j}$$ of each neighboring cell of the source cell $$i$$ and selects the target cell with the smallest overall state value for load balancing.
The overall state of each neighboring cell j is a combination of its own load and its environment [ 24], which is computed as:
$$\text{OS}_{j}= \mu \text{LR}_{j}+(1-\mu) \text{ES}_{j},$$
(4)
where $$\mu$$ weighs the influence of the neighbor’s own load against that of its environment. $$\mu$$ is set by the network operator according to network performance, i.e., according to whether the neighbor cell’s load or its environment is more important for performance improvement in a specific type of network. It is important to mention that the proposed algorithm only decides the appropriate target cell. In order to maintain radio link quality after handover, the choice of edge users remains the same as in the existing algorithms [6]. That is to say, only the source cell’s edge users that are close to the selected target cell and have sufficiently good link quality are handed over to the target cell.
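Steps 2 and 3 amount to computing ES_j (Eq. 3) and OS_j (Eq. 4) for every neighbor and taking the minimum. The toy topology and load values below are assumptions for illustration:

```python
# Sketch of Steps 2-3: pick the neighbour of the overloaded cell with the
# smallest overall state OS_j = mu*LR_j + (1-mu)*ES_j (Eqs. 3-4).

def pick_target(load, neighbours, layer2, mu=0.5):
    """load: cell -> LR; layer2: j -> list of j's layer-2 cells (excl. i)."""
    def overall_state(j):
        es = sum(load[c] for c in layer2[j]) / len(layer2[j])   # Eq. (3)
        return mu * load[j] + (1 - mu) * es                     # Eq. (4)
    return min(neighbours, key=overall_state)

# Toy example: B and C neighbour the overloaded cell; D and E are their
# respective layer-2 cells. OS_B = 0.3 < OS_C = 0.7, so B is the target.
load = {'B': 0.4, 'C': 0.6, 'D': 0.2, 'E': 0.8}
target = pick_target(load, ['B', 'C'], {'B': ['D'], 'C': ['E']})
```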
In traditional distributed load balancing scenarios, there is a possibility that multiple overloaded cells choose the same target cell for load balancing, causing a new overload situation or ping-pong handovers [6, 14]. A distributed solution thus takes more rounds to achieve an optimized state. The SDN-based load balancing mechanism, however, uses an overall network view when selecting the target cell, which decreases the number of handovers and improves system performance.
Moreover, by maintaining a list of BSs, SDN-based load balancing encourages new clients to associate with the least loaded BS upon arrival. Then, after a certain amount of time, if an overloaded cell exists, the controller uses the load balancing mechanism to hand over appropriate cell edge users. Along with the partial data offloading algorithm, the SDN-based module framework is expected to achieve an optimized overall system state.

## 4 Performance analysis

In this section, we analyze the performance of the resource management schemes introduced in the previous section. To this end, we begin by formulating the delay incurred due to the processing at the SDN-based controller and switches.

### 4.1 SDN network delay D

Generally, the average delay introduced by SDN depends on the state of the flow table within the switch, i.e., whether or not the switch’s flow table contains a rule for the incoming traffic flow. Figure 5 shows the queuing model for the controller and the switch.
In Fig. 5, λ is the data arrival rate while $$\mu_{s}$$ and $$\mu_{c}$$ represent the processing rates at the switch and the controller, respectively. As shown in the figure, if the packet arriving at the switch is the first packet of a new data flow (i.e., no flow entry in the switch matches its IP addresses obtained from the TFT filter), the switch forwards this packet to the controller over the OpenFlow channel. The controller decides the optimal forwarding rule for this packet and returns it to the switch. Moreover, the controller also pushes the corresponding forwarding rule (flow entry) into the flow tables at the corresponding switches. Subsequent packets of the same flow are forwarded by the switches directly, based on the newly installed forwarding rule. It is worth mentioning here that the following analysis assumes that the incoming traffic uses the transmission control protocol (TCP), i.e., a given source node initially transmits a single packet to initiate the TCP handshake and the actual data is transmitted once the TCP session is established. As a final remark, we do not go into further details of TCP, since the SDN model is the focus of this research. Nonetheless, the TCP congestion window applies both with and without SDN, so its impact remains the same.
Assuming $$P_{1}$$ represents the probability that there is no flow entry in the OpenFlow switch for the incoming packet, the average packet delay, $$D$$, can be written as
$$D=D_{1}\times P_{1}+D_{2}\times (1-P_{1}),$$
(5)
where $$D_{1}$$ is the delay incurred if the switch has to forward the packet to the controller while $$D_{2}$$ represents the delay which occurs when the switch forwards the data directly, i.e., a forwarding rule for the packet already exists. It has been shown in [25] that in a normal production network carrying end-user traffic, a switch observes new flows with a probability of 0.04; hence, we use this value for the probability $$P_{1}$$.
The delay D 1 can be written as
$$D_{1}=T_{C}+2T_{\text{PROP}}+2T_{\text{sw}},$$
(6)
where $$T_{\text{sw}}$$ and $$T_{C}$$, respectively, represent the delays at the switch and the controller, including the queuing and the processing delays. Moreover, $$T_{\text{PROP}}$$ denotes the propagation delay between the controller and the switch. Note that $$T_{\text{sw}}$$ in (6) is multiplied by 2 since the packet makes two passes through the switch: in the first pass, it is forwarded to the controller for further processing, and in the second pass, it is forwarded along the data path once a forwarding rule has been established. Similarly, $$2T_{\text{PROP}}$$ accounts for the propagation delays when the packet is sent to and from the controller. The delay $$D_{2}$$ is equal to $$T_{\text{sw}}$$ only, i.e.,
$$D_{2}=T_{\text{sw}}.$$
(7)
Next, we derive the delays $$T_{\text{sw}}$$ and $$T_{C}$$. Here, it is imperative to mention that in order to simplify the analysis, it is assumed that the packets returning from the controller have no impact on the queuing delay at the switch. This assumption is reasonable for two reasons: firstly, the probability $$P_{1}$$ is relatively small. Secondly, the size of the data returning from the controller is very small compared to the size of the data buffered in the queue, i.e., the controller returns only a single packet at a given time instance while the traffic at the switch arrives in the form of multiple flows containing a large number of data packets. The validity of this assumption is verified later through simulations in Section 5. Although these returned packets have little influence on the queuing delay of other arriving flows, the performance of the flows that do go through the controller is still impacted. That is why the controller processing delay $$T_{C}$$ cannot be ignored.

#### 4.1.1 $$T_{\text{sw}}$$

In order to derive $$T_{\text{sw}}$$, the inter-arrival process at the switch is modeled with a Pareto distribution [26–28], with shape parameter α and scale parameter $$k$$ [29]. Furthermore, the switch service times are modeled as exponentially distributed with rate parameter $$\mu_{s}$$. With the aforementioned assumptions and the Pareto/M/1 queuing model, the switch waiting time $$T_{\text{sw}}$$ is given as [28]
$$T_{\text{sw}}=\frac{1}{\mu_{s} (1-z)},$$
(8)
where $$z$$ is the root of the Laplace transform of the inter-arrival time distribution function. The root $$z$$ is given by (9) [26]. In (9), $$\Gamma^{*}(-\alpha+3, \mu(1-z)) = \Gamma(-\alpha+3)P(-\alpha+3, \mu(1-z))$$, where $$P(-\alpha+3, \mu(1-z))$$ is the regularized incomplete gamma function.
{\fontsize{7.8pt}{12pt}\selectfont{ \begin{aligned} {} z \,=\,\alpha\left[\mu (1\,-\,z)\right]^{\alpha}\! &\left\{\frac{1}{\alpha e^{\mu (1\,-\,z)}[\!\mu (1-z)]^{\alpha}}\right.\\ &\quad \left.-\frac{1}{\alpha(\alpha\,-\,1) e^{\mu (1-z)}\left[\!\mu (1\,-\,z)\right]^{\alpha-1}}\right.\\ &\quad \left.+\frac{1}{\alpha(\alpha\,-\,1)(\alpha\,-\,2) e^{\mu (1-z)}\left[\!\mu (1\,-\,z)\right]^{\alpha-2}}\right.\\ &\quad \left.-\frac{1}{\alpha(\alpha\,-\,1)(\alpha\,-\,2)}\left[\!\Gamma(-\alpha\,+\,3)\,-\,\Gamma^{*} (-\alpha+3,\mu (1\,-\,z))\right]\!\right\} \end{aligned}}}
(9)

#### 4.1.2 $$T_{C}$$

The arrival rate at the controller is Poisson distributed since the departure process at the switch is Poisson [30]. Therefore, $$T_{C}$$ can be calculated using the waiting time equation for the M/M/1 queue as
$$\begin{array}{*{20}l} & T_{C}=\frac{1}{\mu_{c} (1-\rho_{c})}. \end{array}$$
(10)
In the above equation, $$\rho_{c}$$ represents the controller utilization, which is equal to $$P_{1}\lambda/\mu_{c}$$ (see Fig. 5).
Using ( 6)–( 10) in ( 5), one can find the average packet delay with SDN-based data forwarding.
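As a numeric sketch of Eqs. (5)–(7) and (10): T_sw is taken as a given input here rather than solved via the Pareto/M/1 fixed point of Eqs. (8)–(9), and the example parameter values (propagation delay, arrival rate) are illustrative assumptions:

```python
# Average SDN packet delay from Eqs. (5)-(7) and (10).
# T_sw is assumed given; the controller is modeled as M/M/1.

def sdn_delay(T_sw, T_prop, lam, mu_c, P1=0.04):
    rho_c = P1 * lam / mu_c                  # controller utilization
    T_C = 1.0 / (mu_c * (1.0 - rho_c))       # Eq. (10)
    D1 = T_C + 2 * T_prop + 2 * T_sw         # Eq. (6): via controller
    D2 = T_sw                                # Eq. (7): direct forwarding
    return P1 * D1 + (1 - P1) * D2           # Eq. (5)

# Illustrative values: T_sw = 9.8 us, T_prop = 100 us,
# lam = 1000 pkt/s, controller service time 0.33 ms.
d = sdn_delay(9.8e-6, 1e-4, 1000.0, 1.0 / 0.33e-3)
```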

### 4.2 Performance analysis of SDN-based PDO algorithm

In [31], Li et al. show that at the application level, the sizes of most web traffic, including multimedia files and Internet documents, follow a long-tailed, Pareto distribution. Therefore, this paper considers the data file $$d$$ transmitted in the offloading session to be Pareto distributed. The cumulative distribution function $$F_{d}(X)$$, which is the probability that $$d$$ is smaller than some number $$X$$, is formulated as:
$$F_{d}(X)=1-\left(\frac{k}{X}\right)^{\alpha}\quad \text{for}\ X \geq k.$$
(11)
In order to analyze the performance of the partial data offloading scheme, we use two key performance indicators: the first is the delay threshold miss probability, $$P_{\text{miss}}$$, which is the probability that the Wi-Fi network is unable to meet the service latency requirements; the second is the amount of data $$d_{1}$$ that is offloaded from the cellular network. The probability $$P_{\text{miss}}$$ is given as
$$P_{\text{miss}} = Pr[d>V_{s}].$$
(12)
Using the cdf of d from ( 11) and the value of V s from ( 1), the above equation can be written as [ 8]
\begin{aligned} P_{\text{miss}} &=\int_{t=0}^{T_{s}} \left[1-F_{d}(b_{1}T_{s} + b_{2}t)\right] r_{c}(t)dt \\ &\quad +\left[1-F_{d}((b_{1}+b_{2})T_{s})\right] \int_{t=T_{s}}^{\infty} r_{c}(t)dt, \end{aligned}
(13)
where $$r_{c}(\cdot)$$ is the probability density function of the residual time $$t_{r}$$. From (13), it can be seen that the probability $$P_{\text{miss}}$$ consists of two parts: the first part reflects the scenario where the time $$t$$ that a user spends in the Wi-Fi connection area is smaller than the application deadline $$T_{s}$$. In this situation, the transmission volume over the Wi-Fi network is $$b_{2}t$$. The second part addresses the scenario where the duration of a user’s stay in the Wi-Fi area is larger than the application deadline. Accordingly, the Wi-Fi transmission volume in this situation is equal to $$b_{2}T_{s}$$.
By setting $$b_{1}$$ in (13) equal to 0, one gets $$P_{\text{miss}}$$ for the scenario where all of the traffic is offloaded onto the Wi-Fi network. Recall that $$t_{r}$$ is the residual life of $$t_{c}$$. According to the distribution of the asymptotic residual life from renewal theory [32], the probability density function $$r_{c}(\cdot)$$ can be formulated as:
\begin{aligned} \ r_{c}(t_{r}) &=\frac{1-P(T<t_{r})}{E[t_{c}]} =\frac{1-F_{c}(t_{r})}{E[t_{c}]}, \end{aligned}
(14)
where $$E[t_{c}]$$ is the expected value of the connection time $$t_{c}$$ while $$F_{c}(\cdot)$$ is the distribution function of $$t_{c}$$.
Assuming that $$t_{c}$$ follows an Erlang distribution with parameters $$(n, \lambda_{e})$$, which is a widely used traffic model, the distribution function $$F_{c}(\cdot)$$ of the connection time $$t_{c}$$ can be formulated as [32]
\begin{aligned} \ &F_{c}(t_{c})=\frac{\gamma(n,\lambda_{e} t_{c})}{(n-1)!}=1-\sum_{m=0}^{n-1}\frac{1}{m!}e^{-\lambda_{e} t_{c}}(\lambda_{e} t_{c})^{m}, \end{aligned}
(15)
where $$\gamma(\cdot)$$ is the lower incomplete gamma function and $$E[t_{c}]$$ is equal to $$n/\lambda_{e}$$. By using (14) and (15) in (13), one obtains $$P_{\text{miss}}$$ as
{\fontsize{8.6pt}{12pt}\selectfont{ \begin{aligned} {} P_{\text{miss}} &\,=\, \left(\frac{\lambda_{e}}{n}\right)\sum_{m=0}^{n-1}\int_{t=0}^{T_{s}}\left(\frac{k}{b_{1}T_{s}\,+\,b_{2}t}\right)^{\alpha}\frac{e^{-\lambda_{e} t}(\lambda_{e} t)^{m}}{m!}dt\\ &\quad+\!\left(\!\frac{k}{(b_{1}\,+\,b_{2})T_{s}}\!\right)^{\alpha}\!\left\{\!1\,-\,\left(\frac{1}{n}\right)\sum_{m=0}^{n-1}\!\left[\!1\,-\,\sum_{j=0}^{m}\frac{e^{-\lambda_{e} T_{s}}(\lambda_{e} T_{s})^{j}}{j!}\!\right]\!\right\}\!, \end{aligned}}}
(16)
where $$T_{s} = T_{d} - D$$. $$P_{\text{miss}}$$ characterizes the performance of the partial offloading scheme as a function of the delay tolerance threshold $$T_{s}$$. The improvement offered by the proposed algorithm can be assessed by comparing $$P_{\text{miss}}$$ under various primary bandwidths $$b_{1}$$.
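Rather than evaluating the closed form (16), one can also check P_miss numerically by quadrature over Eq. (13), using the Erlang residual density of Eqs. (14)–(15) and the Pareto tail of Eq. (11). The midpoint-rule sketch below assumes those same distributions; all parameter values in the usage example are illustrative:

```python
import math

def pmiss(b1, b2, T_s, k, alpha, n, lam_e, steps=2000):
    """Numerically evaluate Eq. (13): Pareto file size d (Eq. 11),
    Erlang(n, lam_e) connection time t_c (Eqs. 14-15)."""
    def F_c(t):
        # Erlang cdf, Eq. (15)
        return 1.0 - sum(math.exp(-lam_e * t) * (lam_e * t) ** m
                         / math.factorial(m) for m in range(n))
    E_tc = n / lam_e
    r_c = lambda t: (1.0 - F_c(t)) / E_tc                 # Eq. (14)
    tail = lambda x: (k / x) ** alpha if x >= k else 1.0  # 1 - F_d, Eq. (11)
    h = T_s / steps
    # First term of Eq. (13): integral over t in [0, T_s] (midpoint rule).
    part1 = sum(tail(b1 * T_s + b2 * (i + 0.5) * h) * r_c((i + 0.5) * h) * h
                for i in range(steps))
    # Pr[t_r <= T_s], used for the second term's tail integral.
    R_Ts = sum(r_c((i + 0.5) * h) * h for i in range(steps))
    return part1 + tail((b1 + b2) * T_s) * (1.0 - R_Ts)
```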
The second indicator, the amount of offloaded data $$d_{1}$$ served by the Wi-Fi network, can be expressed as:
d_{1}= \left\{ \begin{aligned} \ & \frac{b_{2}}{b_{1}+b_{2}}d, \ \text{if}\ d>b_{2} t_{r} \\ \ & b_{2} t_{r}, \ \text{otherwise}, \end{aligned} \right.
(17)
which means that if the data volume is larger than the Wi-Fi capability, the data is transmitted concurrently over the cellular and Wi-Fi networks; otherwise, all the data is offloaded onto the Wi-Fi network. Equation (17) can be re-formulated as
$$d_{1} = \frac{b_{2}d}{b_{1}+b_{2}}Pr[d>b_{2} t_{r}]+b_{2} t_{r}Pr[d \leq b_{2} t_{r}]$$
(18)
Using a similar procedure as that presented in [ 8], the above equation can be written as
$$d_{1} = \frac{b_{2}d}{b_{1}+b_{2}}[1-R_{c}\left(\frac{d}{b_{2}}\right)]+b_{2}\int_{0}^{d/b_{2}}t_{r}r_{c}(t_{r})dt_{r},$$
(19)
where $$R_{c}(\cdot)$$ is the distribution function of $$t_{r}$$ and $$1-R_{c}(\frac {d}{b_{2}})$$ is the probability that $$\frac {d}{b_{2}}$$ is larger than the time value $$t_{r}$$. Using (14), one obtains $$R_{c}(\cdot)$$ as
\begin{aligned} \ R_{c}(t_{r}) &=\int_{t=0}^{t_{r}} r_{c}(t)dt\\ \ &=\left(\frac{1}{n}\right)\sum_{m=0}^{n-1}\left[1-\sum_{j=0}^{m}\frac{e^{-\lambda_{e} t_{r}}(\lambda_{e} t_{r})^{j}}{j!}\right] \end{aligned}
(20)
Finally, by replacing $$t_{r}$$ in the above equation with $$\frac {d}{b_{2}}$$ and using the resulting equation in (19), one gets the offloaded data volume $$d_{1}$$ as
\begin{aligned} {} d_{1} &=\frac{b_{2}d}{b_{1}+b_{2}}\left\{1-\left(\frac{1}{n}\right)\sum_{m=0}^{n-1}\left[1-\sum_{j=0}^{m}\frac{e^{-\lambda_{e} x}(\lambda_{e} x)^{j}}{j!}\right]\right\}\\ & +\frac{b_{2}}{\lambda_{e} n}\sum_{m=0}^{n-1}(m+1)\left[1-\sum_{j=0}^{m+1}\frac{e^{-\lambda_{e} x}(\lambda_{e} x)^{j}}{j!}\right], \end{aligned}
(21)
where $$x = d/b_{2}$$. In particular, $$\frac {d_{1}}{E[d]}$$ describes the proportion of the data that has been offloaded onto the Wi-Fi network.
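Eq. (19) can likewise be evaluated numerically: the sketch below computes R_c(d/b2) from Eq. (20) exactly and approximates the remaining integral with a midpoint rule. The parameter values in the test are illustrative assumptions:

```python
import math

def offloaded_volume(d, b1, b2, n, lam_e, steps=2000):
    """Evaluate Eq. (19) for the offloaded volume d1, assuming
    Erlang(n, lam_e) connection times (Eqs. 14-15, 20)."""
    x = d / b2
    def F_c(t):
        # Erlang cdf, Eq. (15)
        return 1.0 - sum(math.exp(-lam_e * t) * (lam_e * t) ** m
                         / math.factorial(m) for m in range(n))
    E_tc = n / lam_e
    # R_c(d/b2) via the double sum of Eq. (20).
    R = sum(1.0 - sum(math.exp(-lam_e * x) * (lam_e * x) ** j
                      / math.factorial(j) for j in range(m + 1))
            for m in range(n)) / n
    # Second term of Eq. (19): b2 * integral_0^{d/b2} t r_c(t) dt (midpoint).
    h = x / steps
    integral = sum(((i + 0.5) * h) * (1.0 - F_c((i + 0.5) * h)) / E_tc * h
                   for i in range(steps))
    return b2 * d / (b1 + b2) * (1.0 - R) + b2 * integral
```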

### 4.3 Performance analysis of SDN-based load balancing algorithm

The performance of the SDN-based load balancing mechanism is measured by the number of handovers, the equilibrium extent of the network, and the network throughput.
Equilibrium extent is defined as the degree of load balanced across the entire network [ 6] and it can be written as
$$\nabla(t)=\frac{(\sum_{c} \rho_{c})^{2}}{|N|\sum_{c} (\rho_{c})^{2}},$$
(22)
where $$|N|$$ is the number of cells and $$\rho_{c}$$ is the load density of cell $$c$$. Clearly, network resources are better utilized when the load is more evenly balanced across the network. In this paper, we use the overall network throughput, calculated from the signal-to-noise ratio (SNR) and the related bit error rate (BER), to measure the performance of network resource utilization.
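The equilibrium extent of Eq. (22) is straightforward to compute; by the Cauchy–Schwarz inequality it equals 1 for a perfectly balanced network and drops to 1/|N| when a single cell carries all the load:

```python
# Equilibrium extent of Eq. (22): (sum rho_c)^2 / (|N| * sum rho_c^2).

def equilibrium_extent(loads):
    s = sum(loads)
    sq = sum(x * x for x in loads)
    return s * s / (len(loads) * sq)

# Perfectly balanced -> 1.0; all load in one of three cells -> 1/3.
```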

## 5 Performance evaluation

In this section, we evaluate the performance of SDN and the proposed algorithms. To this end, we begin by evaluating the performance of SDN-enabled switches and controller.

### 5.1 Performance evaluation of SDN

In this section, the performance of the SDN framework shown in Fig. 5 is evaluated in terms of network utilization and incurred delay. For the simulation setup, the service times of the controller and the switch were set to 0.33 ms and 9.8 μs, respectively [30, 33]. To account for different types of data traffic, we used different values of the shape parameter α for the Pareto arrivals at the switch, specifically α=1.5 and α=2.5. Moreover, we also evaluate the performance when the arrival process at the switch is Poisson. All simulation results were averaged over 10,000 iterations.
Figure 6 shows the SDN delay as a function of the network utilization ρ. Recall that in the queuing model introduced in Section 4.1, it was assumed that the packets returning from the controller to the switch cause negligible delay in the switch queue. To justify this assumption, in Fig. 6, we also plot the SDN delay which occurs when the effect of the packets returning from the controller is not ignored (model 2 in Fig. 6). From the figure, it can be seen that the simulation results with and without the aforementioned assumption are approximately the same and therefore, our assumption is justified. Moreover, it can also be seen that the theoretical and the simulation results also match very closely with each other, thus verifying the validity of the analysis in Section 4.1.
As mentioned previously, different α values represent different types of application traffic [1]. It can be seen that with decreasing α, the network delay increases under the same utilization rate, which shows that smaller traffic flows place a greater burden on the SDN network due to the higher arrival rate. Traditional telecommunication voice traffic, represented by Poisson arrivals, has the lowest delay since it does not participate in the SDN offloading procedure. Based on the above discussion, it can be concluded that the SDN-based solution is more suitable for applications that are less sensitive to latency.

### 5.2 Performance evaluation of partial data offloading

In this section, we evaluate the performance of the proposed partial data offloading scheme. To this end, all simulations were conducted in MATLAB. In each simulation round, a user moved randomly across the offloading area with residence time t_c, which followed an Erlang distribution [8]. A download of size d was initiated at a given time instance t whenever the user resided within the offloading area. For demonstration purposes, the session data traffic d was assumed to have shape parameter α=1.5 and scale parameter k=E[d](α−1)/α [29]. The default Wi-Fi bandwidth, b_2, was set to 5 Mb/s.
Figure 8 plots the amount of offloaded data as a function of the average residence time, t_r. The figure shows that when the average residence time exceeds 6 min, the amount of offloaded data remains unchanged. The reason is that although a session offloads more data if the user stays longer in the offloading area, the maximum amount of offloaded data is limited by E[d]. Figure 8 also shows that the amount of offloaded data increases significantly with increasing E[d]. This is reasonable: when the cellular network carries a higher load, offloading plays a more significant role.
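The saturation effect above can be illustrated with a simplified Monte Carlo sketch (again, not the paper's MATLAB code): per session, the offloaded amount is capped both by the session size d and by the data that the Wi-Fi link b_2 can carry during the residence time t_c. The Erlang order k and the min-based capping rule are assumptions for illustration.

```python
import random

def erlang(k, mean):
    """Erlang-k sample with the given mean (sum of k exponentials)."""
    return sum(random.expovariate(k / mean) for _ in range(k))

def pareto(alpha, mean):
    """Pareto sample with shape alpha > 1, scale chosen for the given mean."""
    xm = mean * (alpha - 1) / alpha
    u = 1.0 - random.random()  # u in (0, 1]
    return xm / (u ** (1.0 / alpha))

def avg_offloaded(mean_residence_s, mean_session_Mb, b2_Mbps=5.0,
                  alpha=1.5, k=2, rounds=20_000, seed=1):
    """Average offloaded data per session (Mb): a session of size d can
    offload at most b2 * t_c while the user stays in the Wi-Fi area."""
    random.seed(seed)
    total = 0.0
    for _ in range(rounds):
        t_c = erlang(k, mean_residence_s)    # residence time, s
        d = pareto(alpha, mean_session_Mb)   # session size, Mb
        total += min(d, b2_Mbps * t_c)       # offloaded amount, Mb
    return total / rounds
```

For short residence times the Wi-Fi link is the bottleneck; as the average residence time grows, the average offloaded amount saturates near the mean session size, matching the plateau in Fig. 8.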

### 5.3 Performance evaluation of SDN-based load balancing

For the simulation, we consider a densified 37-cell (4-layer) hexagonal layout. All simulations are conducted in MATLAB. For the small cells, the inter-site distance is set to 300 m [3], and the wrap-around technique is used to avoid boundary effects [6]. We assume that every user requests a constant bit rate of 1 Mbps. With a bandwidth of 10 MHz, the cell capacity in this case is 15 UEs per cell. The main simulation parameters are given in Table 1 [6].
Table 1
Simulation parameters

| Parameter | Value |
| --- | --- |
| System bandwidth | 10 MHz |
| Cell layout | Hexagonal grid, 37 cell sites, with wrap-around technique |
|  | 150 m |
| Pathloss | −38.4 − 35.0 log10 R (R: distance between UE and AP) |
| Shadowing | Log-normal with standard deviation 8 dB |
| Antenna gain | −7 dB |
| eNB Tx power | 46 dBm |
| Traffic model | CBR 1 Mbps full-buffer traffic |
Moreover, we artificially create a heavy load concentration within the 37 densified cells to demonstrate the performance of the load balancing algorithms. At the beginning, a total of 365 UEs are dropped uniformly according to the following setting: 7 cells with 20 UEs each (overloaded), 15 cells with 10 UEs each, and 15 cells with 5 UEs each (lightly loaded). Each UE performs a random walk within the area, changing direction every 2.5 s [24]. We assume that half of the UEs do not move out of their original cells, so that the heavy load concentration is not dissolved by the UEs' random walks.
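The initial UE drop and the per-UE mobility step described above can be reproduced as follows; the assignment of load levels to particular cells is arbitrary in this sketch, and the speed parameter is an assumption.

```python
import math
import random

def drop_ues(seed=1):
    """Initial drop: 7 cells x 20 UEs, 15 cells x 10 UEs, 15 cells x 5 UEs
    over the 37-cell layout (365 UEs in total)."""
    random.seed(seed)
    counts = [20] * 7 + [10] * 15 + [5] * 15  # 37 cells
    cells = list(range(37))
    random.shuffle(cells)                     # which cells get which load
    return {cell: n for cell, n in zip(cells, counts)}

def walk_step(x, y, speed, direction, dt=2.5):
    """One random-walk step: move for dt seconds in the current direction,
    then draw a new uniformly random direction."""
    x += speed * dt * math.cos(direction)
    y += speed * dt * math.sin(direction)
    return x, y, random.uniform(0.0, 2.0 * math.pi)
```

A full run would iterate `walk_step` for every mobile UE each 2.5 s interval, pinning half of the UEs to their original cells as stated above.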
For the simulation of the proposed SDN-based load balancing mechanism, we set up two reference scenarios: traditional load balancing and distributed mobility load balancing (DLB) [6]. In traditional load balancing, decisions are made based on the received signal strengths. In DLB, on the other hand, the handover parameter is adjusted dynamically according to the cell load measurement $$\text{LOAD}_{c}=\min \left (\frac {\sum _{u\in c} N_{u}}{N_{\text{tot}}},1\right )$$ [6], which is the ratio of the resources N_u required by all users u in cell c to the total number of resources N_tot in the cell. That is, when the load of the source cell exceeds a certain threshold, load balancing is executed and edge users are handed over to the target cell with the higher SNR or the lower load ratio, respectively.
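A minimal sketch of this load metric and the DLB-style target selection is given below; the load threshold value is illustrative. The SDN-based variant would apply the same selection over the loads of all cells in the network (its global view) rather than only the immediate neighbors.

```python
def cell_load(demands, n_tot):
    """LOAD_c = min(sum of resources required by users in cell c / N_tot, 1)."""
    return min(sum(demands) / n_tot, 1.0)

def pick_target(source, neighbors, loads, threshold=0.8):
    """DLB-style rule: if the source cell exceeds the load threshold,
    hand an edge user over to the least-loaded, less-loaded neighbor.
    Returns None when no load balancing is needed or possible."""
    if loads[source] <= threshold:
        return None
    candidates = [c for c in neighbors if loads[c] < loads[source]]
    if not candidates:
        return None
    return min(candidates, key=lambda c: loads[c])
```

With N_tot = 15 (the per-cell capacity above), an overloaded cell with 20 unit-rate UEs reports LOAD_c = 1 and triggers a handover toward its least-loaded neighbor.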
Figure 9 depicts the number of handover instances as a function of the simulation runtime. A lower number of handovers is desirable since frequent handovers degrade service quality and user experience. The figure shows that SDN-based LB incurs fewer handovers than the other two scenarios. This is because, with better knowledge of all cell load states and trends, SDN-based LB selects the most suitable target cells more efficiently, whereas DLB takes more rounds and consequently more time to reach the optimized state.
Figure 10 shows the extent of equilibrium between cells as a function of the simulation runtime. With increasing simulation time, the traffic load becomes more balanced under all LB mechanisms. However, SDN-based LB outperforms the baseline methods, which have only a limited view of the network. Notice that at the start of the simulation, SDN-based LB performs worse than the other two methods. This happens because the baseline methods choose the neighboring cell with the lowest load as the LB target, which yields faster initial balancing; however, that target cell eventually becomes overloaded if its surrounding cells are themselves highly loaded. In contrast, with an overall view of the network, SDN-based LB uses the global state to select the target cell and thus guarantees a long-term advantage in the extent of equilibrium.
The load balancing performance is further verified by comparing the network throughput, as shown in Fig. 11. The throughput trend is similar to that of the load balancing extent in Fig. 10, which is easy to understand: when the load is more balanced within the network, the resources are utilized more efficiently, yielding a higher throughput for the whole network.

## Competing interests

The authors declare that they have no competing interests.
