Published in: EURASIP Journal on Wireless Communications and Networking 1/2019

Open Access 01-12-2019 | Research

An energy- and cost-aware computation offloading method for workflow applications in mobile edge computing

Authors: Kai Peng, Maosheng Zhu, Yiwen Zhang, Lingxia Liu, Jie Zhang, Victor C.M. Leung, Lixin Zheng


Abstract

Mobile edge computing (MEC) is becoming a promising computing architecture to overcome the resource limitations of mobile devices and the bandwidth bottleneck of the core networks in mobile cloud computing. Although offloading applications to the cloud can extend the capabilities of mobile devices, it may also lead to greater processing latency. Moreover, mobile users usually have to pay for the cloudlet or cloud resources they use. In this paper, we present a thorough study of the energy consumption, time consumption, and cost of using cloudlet and cloud resources for workflow applications in mobile edge computing. Based on theoretical analysis, a multi-objective optimization model is established. In addition, a corresponding multi-objective computation offloading method based on the non-dominated sorting genetic algorithm II is proposed to find the optimal offloading strategy for all the workflow applications. Finally, extensive experimental evaluations are performed to show that our proposed method is effective and energy- and cost-aware for workflow applications in MEC.
Notes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

With the development of computer networks, cloud computing, and wireless sensor networks, mobile devices (MDs) have become an indispensable part of people's daily lives [1–7]. In addition, the development of cyber-physical-social systems and big data has further affected people's lives [8–14]. However, compared to a traditional device such as a personal computer, an MD has certain limitations in computing power and storage capacity, and especially in battery capacity. Mobile cloud computing (MCC) brings new services and facilities to mobile users (MUs) to take full advantage of cloud computing [15–20]. However, the remote cloud is usually located far away from the MUs, which may result in high network latency during data transmission. This inevitably reduces the quality of service (QoS), in particular for applications such as workflow applications (WAs), which generally have strict execution deadlines. A task may fail to finish if the transmission latency is too large.
To address the issue of network latency, a new paradigm called mobile edge computing (MEC) has been proposed [21–23]. MEC has become a key technology for realizing the Internet of Things and 5G [24]. MEC can be regarded as a special case of MCC. A cloudlet is a type of edge server that provides various services to users in close proximity to MDs [25–27]. Consequently, latency and energy consumption can be reduced by offloading WAs to the cloudlet.
In MCC, the resources of the cloud are assumed to be unlimited, whereas the resources in MEC are limited. If multiple MUs request services from the cloudlet at the same time, queuing latency will occur. When an MU requests a service that exceeds the capability of a cloudlet, the cloudlet service cannot be obtained. In order to ensure the successful execution of the WAs, it must be further decided whether to execute tasks locally or offload them to the cloud. Moreover, the execution of a WA by the cloudlet or the cloud needs to meet the task deadline constraint, which further increases the difficulty of computation offloading.
Although the computation offloading issue in MCC has been well investigated (to list a few studies here [28–33]), these methods cannot be used directly for computation offloading in MEC. The main reason is that MEC and MCC have completely different architectures. Existing studies on computation offloading in MEC have mainly focused on the optimization of a single objective (time consumption or energy consumption) [34, 35], and seldom consider multi-objective optimization. Balancing multiple objectives remains a challenge. Moreover, they focus on general applications and pay little attention to computation offloading of WAs in MEC [36]. For WAs, it is very important to consider time constraints [37, 38].
Inspired by the fact that the MU has to pay for the resources used in the cloudlet or the cloud [36], in this study we consider the joint optimization of energy consumption, time consumption, and cost for WAs in MEC, while the completion time of each WA is treated as a constraint. The main contributions can be summarized as follows:
(1) The computation offloading problem for WAs in MEC is thoroughly investigated. Both the energy consumption and time consumption, as well as the cost incurred by the MU for using the cloudlet or the cloud, are considered as optimization objectives. Based on theoretical analysis, a multi-objective optimization model is established.
(2) We propose a method named multi-objective computation offloading method for WAs (MCOWA) based on the non-dominated sorting genetic algorithm II (NSGA-II). Several steps of the algorithm are adapted to the needs of this problem.
(3) Extensive experiments and simulations show that, compared to other methods such as the no offloading method, the full offloading to cloud method, and the full offloading to cloudlet method, our proposed method is effective and can provide optimal offloading strategies for MUs.
The abbreviations in this paper are shown in Table 1. The remainder of this paper is organized as follows. In Section 2, the system model and problem formulation are introduced. Section 3 elaborates the multi-objective computation offloading algorithm for WAs (MCOWA). Section 4 presents the comparison analysis and performance evaluation. Section 5 summarizes the related work, and Section 6 concludes the paper and describes the future work.
Table 1
The abbreviations in this paper

LAN      Local area network
MCOWA    Multi-objective computation offloading method for workflow applications
MD(s)    Mobile device(s)
MEC      Mobile edge computing
MU(s)    Mobile user(s)
NSGA-II  Non-dominated sorting genetic algorithm II
QoS      Quality of service
VM(s)    Virtual machine(s)
WA(s)    Workflow application(s)
WAN      Wide area network

2 System model and problem formulation

In this section, the system model and problem formulation are presented. The basic architecture of MEC is described first, and then the basic model is introduced. In addition, the time consumption model, the energy consumption model, and the cost model are described.
The complete organization of MEC is shown in Fig. 1. The cloud is a combination of data centers. An MD could be a mobile phone or a tablet. Each MD has one or more WAs that need to be processed. In general, these applications are time constrained. These applications can be executed directly on the local device, or users can migrate a part of an application or the full application to the cloudlet via a local area network (LAN) or to the cloud via a wide area network (WAN) according to their needs, in order to reduce execution time, energy consumption, or both.

2.1 Basic model

As shown in Fig. 2, a WA is modeled by a directed acyclic graph Gf(V,E), where f represents the fth WA (1≤f≤F) and F represents the total number of WAs. Each application consists of multiple tasks, and any node in Fig. 2 can be seen as a task. V={v1,f,v2,f,…,vN,f} represents the set of tasks, and E represents the set of dependencies between tasks. Each edge is associated with a weight di,j, which represents the size of the data transmitted from task vi,f to task vj,f. The cloudlet is configured with multiple virtual machines (VMs) for concurrently processing the WAs and is modeled as a 3-tuple CIT=(M,fcl,LLAN). It is assumed that the capacity of the cloudlet equals the number of VMs in the cloudlet; thus, M is the number of VMs in the cloudlet, fcl is the processing capacity of the cloudlet, and LLAN is the transmission latency of the LAN. Each task vi,f in V is modeled as a 2-tuple vi,f=(wi,f,si,f), where wi,f is the average number of instructions of task vi,f, and si,f is the offloading strategy for task vi,f. The offloading strategies can be expressed as a one-dimensional vector S={si,f|i=1,2,…,Nf, f=1,2,…,F}, where Nf represents the number of tasks in the fth WA. si,f=0 means that task vi,f is processed locally, si,f=1 means that vi,f is offloaded to the cloudlet, and si,f=2 means that vi,f is offloaded to the cloud.
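For concreteness, the basic model above can be captured by a few plain data structures. The following Java sketch is illustrative only; the class and field names are our own assumptions and are not taken from the paper's implementation.

```java
// Illustrative data structures for the basic model of Section 2.1.
import java.util.ArrayList;
import java.util.List;

class Task {
    final double w;       // w_{i,f}: average number of instructions
    int strategy = 0;     // s_{i,f}: 0 = local, 1 = cloudlet, 2 = cloud
    Task(double w) { this.w = w; }
}

class Edge {
    final int from, to;   // indices of v_{i,f} and v_{j,f} in the task list
    final double d;       // d_{i,j}: size of the data transferred along the edge
    Edge(int from, int to, double d) { this.from = from; this.to = to; this.d = d; }
}

class Workflow {
    final List<Task> tasks = new ArrayList<>();  // V = {v_{1,f}, ..., v_{N_f,f}}
    final List<Edge> edges = new ArrayList<>();  // E: dependencies between tasks
    double deadline;                             // D(f), used in constraint (20)
}

class Cloudlet {
    final int m;          // M: number of VMs in the cloudlet
    final double fcl;     // processing capacity of each VM
    final double lLan;    // L_LAN: LAN transmission latency
    Cloudlet(int m, double fcl, double lLan) { this.m = m; this.fcl = fcl; this.lLan = lLan; }
}
```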

2.2 Time consumption model

The total time consumption mainly contains three aspects, namely, waiting time, processing time, and transmission time.

2.2.1 Average waiting time

It is assumed that the task inter-arrival time follows a negative exponential distribution with parameter λ, and the service time of the cloudlet follows a negative exponential distribution with parameter μ. Based on queuing theory [39], the waiting time model can be established. The probability that the cloudlet is idle is expressed as
$$ p_{0}=\left[\sum\limits_{n=0}^{M-1}\frac{\rho^{n}}{n!}+\frac{\rho^{M}}{M!(1-\rho_{M})}\right]^{-1} $$
(1)
where \(\rho =\frac {\lambda }{\mu }\) indicates the utilization of the cloudlet and \(\rho _{M}\,=\,\frac {\rho }{M}\). Let pn be the probability that the queue length reaches n when the cloudlet is running in a steady state. Then, pn is given as
$$ p_{n}=\left\{ \begin{aligned} \frac{\rho^{n}}{M!M^{n-M}}\cdot{p_{0}} & & {n\geq M}\\ \frac{\rho^{n}}{n!}\cdot{p_{0}} & & {n<M} \end{aligned} \right. $$
(2)
When n≥M, the probability of the tasks waiting in the cloudlet is given as
$$ C_{w}(M,\rho)=\sum\limits_{n=M}^{\infty}p_{n}=\frac{\rho^{M}}{M!(1-\rho_{M})}\cdot p_{0} $$
(3)
Based on Little's theorem, the average waiting queue length Lq and the average queue length LM are given as
$$ L_{q}=\sum\limits_{n=M+1}^{\infty}(n-M)p_{n}=\frac{p_{0}\cdot\rho^{M}}{M!}\cdot\sum\limits_{n=M}^{\infty}(n-M)\rho^{n-M}_{M} $$
(4)
$$ L_{M}=L_{q}+\rho $$
(5)
Overall, the average waiting time for tasks in the cloudlet is given as
$$ W_{q}=\frac{L_{M}}{\lambda}-\frac{1}{\mu}=\frac{1}{M\cdot\mu-\lambda}C_{w}(M,\rho) $$
(6)
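As a sanity check of Eqs. (1)–(6), the average waiting time can be computed numerically from the standard M/M/M (Erlang C) formulas that this subsection follows. The Java sketch below does exactly that; the method and variable names are illustrative assumptions, not the paper's code.

```java
// A small sketch of the M/M/M waiting-time model of Section 2.2.1 (Eqs. 1-6).
public class WaitingTime {

    // Average waiting time W_q for an M/M/M queue with arrival rate lambda,
    // per-VM service rate mu, and M virtual machines (requires lambda < M * mu).
    static double averageWaitingTime(double lambda, double mu, int m) {
        double rho = lambda / mu;          // offered load, Eq. (1) text
        double rhoM = rho / m;             // per-VM utilization, must be < 1

        // p0: probability that the cloudlet is idle (Eq. 1)
        double sum = 0.0;
        double term = 1.0;                 // running value of rho^n / n!
        for (int n = 0; n < m; n++) {
            sum += term;
            term *= rho / (n + 1);
        }
        double tail = term / (1.0 - rhoM); // rho^M / (M! (1 - rho_M))
        double p0 = 1.0 / (sum + tail);

        // C_w(M, rho): probability that an arriving task must wait (Eq. 3)
        double cw = tail * p0;

        // W_q (Eq. 6)
        return cw / (m * mu - lambda);
    }

    public static void main(String[] args) {
        // Example: 5 VMs, service rate 10 tasks/s each, arrival rate 40 tasks/s.
        System.out.println(averageWaitingTime(40.0, 10.0, 5));
    }
}
```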

2.2.2 Processing time and transmission time

It is assumed that the MD has a single CPU with processing capacity fl, and the processing capacity of the cloud is denoted as fc. In addition, the processing capacity of each VM in the cloudlet is denoted as fcl. If si,f=0, the local processing time of task vi,f is given as
$$ T_{\text{pro}}(v_{i,f})=\frac{w_{i,f}}{f_{l}} $$
(7)
If si,f=1, the processing time of the task vi,f is given as
$$ T_{\text{pro}}(v_{i,f})=W_{q}+\frac{w_{i,f}}{f_{cl}}+L_{\text{LAN}} $$
(8)
where LLAN represents the transmission delay between the MD and the cloudlet. If si,f=2, the processing time of the task vi,f is described as
$$ T_{\text{pro}}(v_{i,f})=\frac{w_{i,f}}{f_{c}}+L_{\text{WAN}} $$
(9)
where LWAN represents the transmission delay between the MD and the cloud. Let R denote the data rate of the wireless communication between the MD and the cloudlet or the cloud. The transmission time of the data sent from task vi,f to task vj,f is given as
$$ T_{\text{trans}}(v_{i,f},v_{j,f})=\frac{d_{i,j}}{R} $$
(10)
It is assumed that if tasks vi,f and vj,f are processed on the same side, i.e., si,f=sj,f, then Ttrans=0. If (si,f,sj,f)∈{(0,1),(1,0)}, the data is communicated between the MD and the cloudlet through the LAN with rate Rcl. If (si,f,sj,f)∈{(0,2),(2,0)}, the data is communicated between the MD and the cloud through the WAN with rate Rc. If (si,f,sj,f)∈{(1,2),(2,1)}, the data is transmitted between the cloudlet and the cloud, and the transmission time can be ignored, namely Ttrans=0. Therefore, the total time consumption of the fth WA, which includes the average waiting time, the processing time, and the transmission time, is given as follows.
$$ T_{wa,f}(S)=\sum\limits_{i=1}^{N_{f}}T_{\text{pro}}(v_{i,f})+\sum\limits_{(v_{i,f},v_{j,f})\in E}T_{\text{trans}}(v_{i,f},v_{j,f}) $$
(11)
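Putting Eqs. (7)–(11) together, the total time of one WA under a given offloading strategy can be computed by iterating over its tasks and edges. The sketch below reuses the illustrative classes introduced in Section 2.1; it is an assumption-laden sketch rather than the paper's implementation.

```java
// A sketch of the total time consumption T_{wa,f}(S) of Eqs. (7)-(11).
class TimeModel {
    double fl, fc;           // processing capacities of the MD and the cloud
    double lWan;             // L_WAN: WAN latency
    double rCl, rC;          // data rates of the LAN and WAN links
    double wq;               // average waiting time in the cloudlet (Eq. 6)
    Cloudlet cloudlet;

    // Processing time of one task, depending on its offloading strategy (Eqs. 7-9).
    double processingTime(Task v) {
        switch (v.strategy) {
            case 0:  return v.w / fl;                                  // local
            case 1:  return wq + v.w / cloudlet.fcl + cloudlet.lLan;   // cloudlet
            default: return v.w / fc + lWan;                           // cloud
        }
    }

    // Transmission time between two dependent tasks (Eq. 10 and the rules below it).
    double transmissionTime(Task from, Task to, double d) {
        int a = from.strategy, b = to.strategy;
        if (a == b) return 0.0;                       // same side
        if (a + b == 1) return d / rCl;               // MD <-> cloudlet over LAN
        if (a + b == 2) return d / rC;                // MD <-> cloud over WAN
        return 0.0;                                   // cloudlet <-> cloud: ignored
    }

    // Total time of one workflow application under its current strategy (Eq. 11).
    double totalTime(Workflow wa) {
        double t = 0.0;
        for (Task v : wa.tasks) t += processingTime(v);
        for (Edge e : wa.edges)
            t += transmissionTime(wa.tasks.get(e.from), wa.tasks.get(e.to), e.d);
        return t;
    }
}
```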

2.3 Energy consumption model

The total energy consumption of WA consists of the energy consumption of processing and transmission. Epro(vi,f) indicates the energy consumption generated during the processing of the task vi,f, while Etrans(vi,f,vj,f) represents the energy consumption generated by the data transmission from the task vi,f to the task vj,f on MDs. The formulation is given as
$$ E_{\text{pro}}(v_{i,f})=\left\{ \begin{aligned} \frac{w_{i,f}}{f_{l}}\cdot{p_{A}} & & {s_{i,f}=0}\\ \left(W_{q}+\frac{w_{i,f}}{f_{cl}}+L_{\text{LAN}}\right)\cdot{p_{I}} & & {s_{i,f}=1}\\ \left(\frac{w_{i,f}}{f_{c}}+L_{\text{WAN}}\right)\cdot{p_{I}} & & {s_{i,f}=2} \end{aligned} \right. $$
(12)
where pA and pI, respectively, represent the power when the MD is in the active state and idle state. The transmission energy consumption from task vi,f to task vj,f is described as
$$ E_{\text{trans}}(v_{i,f},v_{j,f})=\frac{d_{i,j}}{B}\cdot{p_{T}} $$
(13)
where pT represents the transmission power of the MD and B denotes the bandwidth of the corresponding wireless link (LAN or WAN). Therefore, the total energy consumption of the fth WA is given as
$$ E_{wa,f}(S)=\sum\limits_{v_{i,f}\in {V}}E_{\text{pro}}(v_{i,f})+\sum\limits_{(v_{i,f},v_{j,f})\in E}E_{\text{trans}}(v_{i,f},v_{j,f}) $$
(14)
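Analogously to the time model, Eqs. (12)–(14) can be evaluated per task and per edge. In the sketch below we assume, mirroring the time model, that the MD spends transmission energy only when exactly one endpoint of an edge runs locally; this assumption and all names are ours, not the paper's.

```java
// A sketch of the energy model of Eqs. (12)-(14), using the illustrative classes above.
class EnergyModel {
    double pA, pI, pT;       // MD power in the active/idle state and when transmitting
    double fl, fc, b;        // MD and cloud capacities, link bandwidth B
    double lWan, wq;         // WAN latency and cloudlet waiting time
    Cloudlet cloudlet;

    double processingEnergy(Task v) {
        switch (v.strategy) {
            case 0:  return (v.w / fl) * pA;                                   // Eq. 12, local
            case 1:  return (wq + v.w / cloudlet.fcl + cloudlet.lLan) * pI;    // cloudlet
            default: return (v.w / fc + lWan) * pI;                            // cloud
        }
    }

    double transmissionEnergy(Task from, Task to, double d) {
        // Assumed: energy is spent on the MD only when data crosses the MD's radio,
        // i.e., exactly one endpoint of the edge runs locally (strategy 0).
        boolean mdInvolved = (from.strategy == 0) ^ (to.strategy == 0);
        return mdInvolved ? (d / b) * pT : 0.0;                                // Eq. 13
    }

    double totalEnergy(Workflow wa) {                                          // Eq. 14
        double total = 0.0;
        for (Task v : wa.tasks) total += processingEnergy(v);
        for (Edge edge : wa.edges)
            total += transmissionEnergy(wa.tasks.get(edge.from), wa.tasks.get(edge.to), edge.d);
        return total;
    }
}
```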

2.4 Cost model

Additionally, the MU has to pay for the resources it uses in the cloudlet or the cloud. It is assumed that the unit price of the cloudlet is a and that of the remote cloud is 2a. That is, if the user's task is processed locally, the cost is 0. If the task is offloaded to the cloudlet for processing, the cost is a. Similarly, if the task is offloaded to the cloud, the cost is 2a. The average cost of a WA is given by Eq. (16), where N is the total number of nodes (tasks) of the WA.
$$ C=\left\{ \begin{aligned} 0 & & {s_{i,f}=0}\\ a & & {s_{i,f}=1}\\ 2a & & {s_{i,f}=2} \end{aligned} \right. $$
(15)
$$ C_{wa,f}(S)=\frac{1}{N}\cdot\sum\limits_{v_{i,f}\in {V}}C $$
(16)

2.5 Problem formulation

The objective of this study is to optimize the energy consumption, time consumption, and cost of all WAs while meeting the deadline constraints given by the WAs. It can be formulated as follows
$$ Min \ {T_{wa,f}(S)},\forall f \in \{1,2\ldots{F}\} $$
(17)
$$ Min \ {E_{wa,f}(S)},\forall f \in \{ 1,2\ldots{F}\} $$
(18)
$$ Min \ {C_{wa,f}(S)},\forall f \in \{1,2\ldots{F}\} $$
(19)
$$ s.t. \ {T_{wa,f}(S)}\leq D(f),\forall f \in \{ 1,2\ldots{F} \} $$
(20)
$$ s_{i,f}\in \{0,1,2\} $$
(21)
where D(f) represents the deadline of the fth WA which is given in advance.
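In a solver, objectives (17)–(19) and constraint (20) are evaluated for each candidate offloading strategy. The sketch below combines the illustrative time, energy, and cost models above into one evaluation routine; the names and structure are assumptions made for illustration only.

```java
// A sketch of evaluating one candidate offloading strategy against the
// objectives (17)-(19) and the deadline constraint (20).
class Evaluation {
    final double time, energy, cost;
    final boolean feasible;
    Evaluation(double t, double e, double c, boolean ok) {
        time = t; energy = e; cost = c; feasible = ok;
    }
}

class FitnessEvaluator {
    TimeModel timeModel;
    EnergyModel energyModel;
    double unitPrice;        // a: unit price of the cloudlet; the cloud costs 2a

    Evaluation evaluate(Workflow wa) {
        double t = timeModel.totalTime(wa);          // Eq. (11)
        double e = energyModel.totalEnergy(wa);      // Eq. (14)
        double c = 0.0;                              // Eqs. (15)-(16), average cost
        for (Task v : wa.tasks)
            c += v.strategy == 0 ? 0.0 : v.strategy == 1 ? unitPrice : 2 * unitPrice;
        c /= wa.tasks.size();
        boolean ok = t <= wa.deadline;               // constraint (20)
        return new Evaluation(t, e, c, ok);
    }
}
```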

3 Multi-objective computation offloading algorithm for workflow applications (MCOWA)

In this section, the details of MCOWA are described. We firstly introduce the main steps of the MCOWA in Section 3.1, and then the description and the corresponding algorithm pseudocode of MCOWA are shown in Section 3.2.

3.1 The main steps of MCOWA

In this paper, our objective is to optimize the energy consumption, time consumption, and cost for multi-WA. The computation offloading problem is defined as a multi-objective problem. NSGA-II [40] is used to obtain the optimal computation offloading strategies for all WAs.
Compared to the traditional genetic algorithm, NSGA-II can find the global optimal solutions among the feasible solutions quickly and accurately. The implementation of NSGA-II is shown in Fig. 3, which consists of five steps. Notice that the detailed implementation of Step 4 is shown in Algorithm 1, and we also introduce how it is called by Algorithm 2 in Section 3.2. The details of each step are shown as follows.

3.1.1 Step 1: Encoding

The tasks of a WA are numbered with integer indices starting from {0,1,…} according to a topological sort. A gene denotes the value of one decision variable and thus represents the offloading strategy of one task of a WA. Multiple genes form a complete chromosome, which can also be seen as an individual, representing one solution to the optimization problem. A number of individuals form a population, which reflects the diversity of solutions. Each chromosome represents a computation offloading strategy for the F WAs. The integer coding method is used, namely, each offloading strategy is encoded as a value in {0,1,2}. The number 0 indicates that the task of the WA is processed by the MD itself, the number 1 indicates that the task is offloaded to the cloudlet, and the number 2 indicates that the task is offloaded to the cloud.
An example of encoding is given in Fig. 4. Each box on the lower chromosome represents a gene and also indicates a task of the WA. The possible value of each gene is in {0,1,2}, denoted as vi=0, vi=1, or vi=2, which is shown above the gene. In addition, genes with the same color have the same offloading strategy.

3.1.2 Step 2: Fitness functions and constraints

The fitness function is the criterion for evaluating individual quality, which is given by Eqs. (17), (18), and (19). The three fitness functions of the computation offloading model (17), (18), and (19) represent the time consumption, the energy consumption, and the cost, respectively. It is necessary to make tradeoffs among the multiple objective functions, namely, to obtain the offloading strategy that balances these three fitness functions well.
Additionally, for each WA, the completion time is obtained according to the computation offloading strategy. If the time constraint of formula (20) is not satisfied, the chromosome represented by the offloading strategy will not be considered in the selection process. A chromosome that satisfies the time constraint is called a valid chromosome.

3.1.3 Step 3: Initialization, selection, crossover, mutation

An initial population P0 is generated randomly, and then binary tournament selection, crossover, and mutation are performed on P0 to obtain a new population Q0.
The crossover operation crosses the corresponding genes of two individuals: two crossover points are selected on the chromosome, and the gene values vi between the two points are exchanged. An example of the crossover operation is shown in Fig. 5.
A mutation operation aimed at enhancing population adaptability is used in MCOWA. Each gene of each individual is mutated with a mutation probability given in advance, and the value of a mutated gene is increased by 1 (kept within the valid range {0,1,2}). An example of mutation is given in Fig. 6.
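The two operators can be sketched as follows on integer-coded chromosomes. The wrap-around rule for mutation and the parameter names are our assumptions for illustration, not the paper's exact operators.

```java
// A sketch of two-point crossover and increment-based mutation on chromosomes
// whose genes take values in {0, 1, 2}.
import java.util.Random;

class GeneticOperators {
    private final Random rnd = new Random();

    // Two-point crossover: exchange the gene segment between two random points.
    int[][] crossover(int[] parentA, int[] parentB) {
        int n = parentA.length;
        int p1 = rnd.nextInt(n), p2 = rnd.nextInt(n);
        int lo = Math.min(p1, p2), hi = Math.max(p1, p2);
        int[] childA = parentA.clone(), childB = parentB.clone();
        for (int i = lo; i <= hi; i++) {
            childA[i] = parentB[i];
            childB[i] = parentA[i];
        }
        return new int[][] { childA, childB };
    }

    // Mutation: each gene is changed with probability pm by adding 1 to its
    // value, wrapped back into the valid strategy range {0, 1, 2}.
    void mutate(int[] chromosome, double pm) {
        for (int i = 0; i < chromosome.length; i++) {
            if (rnd.nextDouble() < pm) {
                chromosome[i] = (chromosome[i] + 1) % 3;
            }
        }
    }
}
```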

3.1.4 Step 4: Non-dominated sort

A new population \(R_{t}=P_{t}\bigcup Q_{t}\) is formed, where t=0 initially. The non-dominated fronts F1,F2,… are then obtained by non-dominated sorting of Rt.
The main steps are shown as follows [40, 41]:
(1) The parameters are initialized, and the population size SCALE is determined. Each individual chromosome is assigned an attribute dominated, which records the number of individuals in the population that dominate it, and a set that stores the individuals it dominates; initially, dominated=0 and the set is empty.
(2) An individual chromosome is selected sequentially from the population and compared with every other individual based on the dominance relation. If the compared individual dominates the selected chromosome, let dominated=dominated+1; if the compared individual is dominated by the selected chromosome, add the compared individual to the set of individuals dominated by the selected chromosome.
(3) Repeat (2) until the dominated attribute and the dominated set of all N chromosomes have been computed.
(4) The population is traversed, and every chromosome whose dominated attribute is 0 is added to rank 1.
(5) For each chromosome in the rank just established, the dominated attribute of every individual in its dominated set is decremented, i.e., dominated=dominated−1. If dominated becomes 0, the individual is added to the rank of the next level.
(6) Repeat (5) until the dominated sets of all chromosomes are empty and every individual has been assigned a rank.
The pseudo-code of the fast non-dominated sorting approach is shown in Algorithm 1.
(Algorithm 1, the fast non-dominated sorting approach, is presented as a figure in the original article.)
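A compact sketch of this fast non-dominated sorting procedure, as described for NSGA-II in [40], is given below. It operates on the objective vectors (time, energy, cost) of the individuals and returns the fronts as lists of indices; it is a sketch of the general technique, not the paper's Algorithm 1 verbatim.

```java
// Fast non-dominated sorting: partition a set of objective vectors into fronts.
import java.util.ArrayList;
import java.util.List;

class NonDominatedSort {

    // True if objective vector a dominates b (all components <=, at least one <).
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int k = 0; k < a.length; k++) {
            if (a[k] > b[k]) return false;
            if (a[k] < b[k]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    static List<List<Integer>> sort(double[][] objectives) {
        int n = objectives.length;
        int[] dominatedCount = new int[n];                 // the "dominated" attribute
        List<List<Integer>> dominatesSet = new ArrayList<>();
        for (int i = 0; i < n; i++) dominatesSet.add(new ArrayList<>());

        List<List<Integer>> fronts = new ArrayList<>();
        List<Integer> first = new ArrayList<>();
        for (int p = 0; p < n; p++) {
            for (int q = 0; q < n; q++) {
                if (p == q) continue;
                if (dominates(objectives[p], objectives[q])) dominatesSet.get(p).add(q);
                else if (dominates(objectives[q], objectives[p])) dominatedCount[p]++;
            }
            if (dominatedCount[p] == 0) first.add(p);      // rank 1
        }
        fronts.add(first);

        // Peel off the remaining fronts one rank at a time.
        List<Integer> current = first;
        while (!current.isEmpty()) {
            List<Integer> next = new ArrayList<>();
            for (int p : current) {
                for (int q : dominatesSet.get(p)) {
                    if (--dominatedCount[q] == 0) next.add(q);
                }
            }
            if (!next.isEmpty()) fronts.add(next);
            current = next;
        }
        return fronts;
    }
}
```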

3.1.5 Step 5: Crowding distance calculation

All Fi are sorted according to the crowding distance comparison operator ≺n, and the best N individuals are selected to form a population Pt+1. The crowding distance formula is shown in (22)
$$ \begin{aligned} i_{d}=i_{d}^{T}+i_{d}^{E}+i_{d}^{C}=\sum\limits_{f=1}^{F}\left(|T_{wa,f}^{i+1}(S)-T_{wa,f}^{i-1}(S)|\right)\\ +\sum\limits_{f=1}^{F}\left(|E_{wa,f}^{i+1}(S)-E_{wa,f}^{i-1}(S)|\right)\\ +\sum\limits_{f=1}^{F}\left(|C_{wa,f}^{i+1}(S)-C_{wa,f}^{i-1}(S)|\right) \end{aligned} $$
(22)
where id represents the crowding distance of the ith offloading strategy si,f for the fth WA, and \(i_{d}^{T}, i_{d}^{E}\), and \(i_{d}^{C}\) represent its components with respect to the three objective functions, respectively. \(T_{wa,f}^{i+1}(S)\) represents the value of the offloading strategy si+1,f for the objective function Twa,f(S). In addition, \(E_{wa,f}^{i+1}(S)\) represents the value of the offloading strategy si+1,f for the objective function Ewa,f(S). Similarly, \(C_{wa,f}^{i+1}(S)\) represents the value of the offloading strategy si+1,f for the objective function Cwa,f(S).
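In code, the crowding distance is typically computed per front by sorting the front along each objective and accumulating the gap between each individual's neighbours, as in standard NSGA-II [40] and in the spirit of Eq. (22). The sketch below follows that standard procedure (boundary individuals receive an infinite distance; the per-objective normalization used in [40] is omitted for brevity); it is illustrative rather than the paper's exact formula.

```java
// Crowding distance of the individuals in one non-dominated front.
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

class CrowdingDistance {

    // front: indices of the individuals in one front.
    // objectives[i][k]: value of objective k (time, energy, cost) for individual i.
    static double[] compute(List<Integer> front, double[][] objectives) {
        int numObjectives = objectives[0].length;
        double[] distance = new double[objectives.length];

        for (int k = 0; k < numObjectives; k++) {
            final int obj = k;
            List<Integer> sorted = front.stream()
                    .sorted(Comparator.comparingDouble(i -> objectives[i][obj]))
                    .collect(Collectors.toList());

            // Boundary solutions of each objective are always preserved.
            distance[sorted.get(0)] = Double.POSITIVE_INFINITY;
            distance[sorted.get(sorted.size() - 1)] = Double.POSITIVE_INFINITY;

            // Inner solutions accumulate the gap between their two neighbours.
            for (int j = 1; j < sorted.size() - 1; j++) {
                distance[sorted.get(j)] +=
                        Math.abs(objectives[sorted.get(j + 1)][obj]
                               - objectives[sorted.get(j - 1)][obj]);
            }
        }
        return distance;
    }
}
```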
The population Pt+1 is subjected to replication, crossover, and mutation operations to form a population Qt+1. If the termination condition is true (the maximum number of iterations is met), then it ends. Otherwise, t=t+1 and goes to Step 2.

3.2 MCOWA pseudocode

The pseudocode of the MCOWA method is shown in Algorithm 2. The input of Algorithm 2 is the set of WAs. The algorithm starts from the first iteration (Line 1). Firstly, the population is initialized, the chromosomes whose completion time does not satisfy the deadline constraint are removed from the population, and the new generation is denoted as \(P_{t}^{'} \) (Lines 3 to 8). The two populations \(P_{t}^{'}\) and Qt of size N form a population Rt with a population size of 2N (Lines 9 to 10). The population Rt is divided into multiple non-dominated layers by calling Algorithm 1 (Line 11). F is prepared for the selection operation, and the population Pt+1 is set to empty to store the new generation of the population (Line 12). Additionally, the best individuals are selected to fill a new population of size N according to the crowding distance (Lines 13 to 18). Then the offspring population is generated by crossover and mutation and put into Qt+1 (Line 19). The offspring population is merged with the parent population and iterated again until the algorithm ends (Lines 2 to 22). Finally, the optimal offloading strategies and the time consumption, energy consumption, and cost of all WAs are output (Line 23).
(Algorithm 2, the MCOWA method, is presented as a figure in the original article.)
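To make the overall flow concrete, a high-level sketch of such a loop is shown below, assembled from the illustrative components above: filter out deadline violators, generate offspring, merge parent and offspring populations, sort them into fronts, and fill the next generation front by front. All names are assumptions, the helper bodies are stubs, and the crowding-distance truncation of the last partially fitting front is omitted for brevity; this is not the paper's Algorithm 2.

```java
// A high-level, assumption-laden sketch of an NSGA-II-style main loop for MCOWA.
import java.util.ArrayList;
import java.util.List;

class Mcowa {
    FitnessEvaluator evaluator;   // used inside the helper stubs below
    GeneticOperators operators;   // crossover and mutation from Step 3
    int populationSize;           // N
    int maxGenerations;

    List<int[]> run(List<Workflow> workflows, List<int[]> initialPopulation) {
        List<int[]> parents = filterFeasible(workflows, initialPopulation);   // P'_t
        for (int t = 0; t < maxGenerations; t++) {
            List<int[]> offspring = makeOffspring(parents);                   // Q_t
            List<int[]> combined = new ArrayList<>(parents);                  // R_t = P'_t U Q_t
            combined.addAll(offspring);

            double[][] objectives = objectivesOf(workflows, combined);
            List<List<Integer>> fronts = NonDominatedSort.sort(objectives);   // Algorithm 1

            // Fill the next generation front by front; a partially fitting
            // front would be truncated by crowding distance (omitted here).
            List<int[]> next = new ArrayList<>();
            for (List<Integer> front : fronts) {
                if (next.size() + front.size() > populationSize) break;
                for (int idx : front) next.add(combined.get(idx));
            }
            parents = next;
        }
        return parents;   // approximated Pareto-optimal offloading strategies
    }

    // Helper stubs: apply the strategies to the WAs and evaluate them, drop
    // deadline violators, and produce offspring by selection, crossover, and
    // mutation. Their bodies are omitted in this sketch.
    private List<int[]> filterFeasible(List<Workflow> was, List<int[]> pop) { return pop; }
    private double[][] objectivesOf(List<Workflow> was, List<int[]> pop) { return new double[pop.size()][3]; }
    private List<int[]> makeOffspring(List<int[]> parents) { return new ArrayList<>(parents); }
}
```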

4 Experimental evaluation

In this section, a comprehensive simulation and experiment are carried out to evaluate the performance of the proposed MCOWA method. Specifically, the simulation setup is introduced firstly, including the experimental parameter settings and other comparative methods. Then, the influence of different WA scales on the energy consumption performance, time consumption performance, and cost performance of the compared methods and MCOWA is evaluated.

4.1 Experimental settings

In order to make a comparative analysis, several other computation offloading methods are employed in addition to our MCOWA method. The comparative methods are introduced as follows.
No offloading (NO): All tasks of a WA are processed on the MD. There is no transmission overhead between any two tasks, and there is no cost for using the resources of the cloudlet or the cloud.
Full offloading to cloud (FOC): All computation tasks of WAs are moved from the local MD to the remote cloud for processing, named as FOC.
Full offloading to cloudlet (FOCL): All computation tasks of WAs are moved from the local MD to the cloudlet for processing, named as FOCL.
MCOWA: With the help of MCOWA, all tasks are partitioned into three sets: one for local processing on the MD, one for processing on the cloudlet, and one for remote processing on the cloud.
We use the same parameter settings as in reference [42], and the values of some new parameters are presented. The details are shown in Table 2. The methods are implemented in Java using Eclipse on a PC with two Intel Core i5-5200U 2.20 GHz processors and 4 GB of RAM; the operating system is 64-bit Windows 7.
Table 2
Parameter settings

Parameter                                              Value
The power of MDs when the CPU is in the idle state     0.001 W
The power of MDs when the CPU is in the active state   0.5 W
The transmission power of MDs                          0.1 W
The processing capacity of MDs                         500 MHz
The processing capacity of the cloudlet                2000 MHz
The processing capacity of the cloud                   3000 MHz
The latency of the LAN                                 1 ms
The latency of the WAN                                 30 ms
The bandwidth of the LAN                               100 kb/s
The bandwidth of the WAN                               50 kb/s
The average waiting time of tasks in the cloudlet      20 ms
The cost of the cloudlet for each task                 2
The cost of the cloud for each task                    4

4.2 Performance evaluation

We obtained different results under different numbers of WAs. Fifty experiments were performed, in the case of convergence, for each WA scale. Firstly, we discuss how MCOWA balances the three objectives.
As shown in Fig. 7, MCOWA is effective for two mobile users. Similarly, it can also be used for scenarios with three or more users, based on the experimental results shown in Figs. 8, 9, 10, 11, and 12. More specifically, we can conclude that each user obtains the best results from the perspective of energy consumption and time consumption.
We can conclude that MCOWA remains effective as the number of WAs increases. More specifically, a WA consists of 13 tasks, so two WAs contain 26 tasks, and so on. Additionally, as shown in Figs. 7, 8, 9, 10, 11, and 12, compared to FOC and FOCL, MCOWA has a smaller cost. If we only consider the cost factor, there is no cost for local processing, and it may seem that MCOWA is not better than NO. However, MCOWA minimizes the time consumption and energy consumption of the WAs while keeping the cost within an acceptable range. Overall, MCOWA is effective because the offloading strategy improves all three objectives together, rather than making only one of them the best.
Secondly, we discuss how MCOWA provides an effective strategy to balance the three offloading destinations, namely local, cloudlet, and cloud. As shown in Tables 3, 4, 5, 6, 7, and 8, since the cost is in an acceptable range, tasks are mainly offloaded to the cloudlet. That means the cloudlet is the optimal offloading destination. If the number of tasks exceeds the processing capacity of the cloudlet, some tasks are offloaded to the cloud in order to reduce the queuing latency while meeting the deadline constraints of the tasks. In addition, as the number of WAs increases, the number of corresponding tasks increases, and the competition for the computing resources of the cloudlet among WAs becomes more intense, since the cloudlet is still the first choice for offloading. Considering the limitations of cloudlet resources and the cost of processing tasks in the cloud, the number of tasks executed locally increases and the number of tasks offloaded to the cloud decreases, which also proves that our strategy is effective and can balance the offloading destinations well.
Table 3
The number of WAs = 2

Location   NO   FOCL   FOC   MCOWA
Local      26   0      0     0
Cloudlet   0    26     0     24
Cloud      0    0      26    2

Table 4
The number of WAs = 3

Location   NO   FOCL   FOC   MCOWA
Local      39   0      0     2
Cloudlet   0    39     0     35
Cloud      0    0      39    2

Table 5
The number of WAs = 4

Location   NO   FOCL   FOC   MCOWA
Local      52   0      0     4
Cloudlet   0    52     0     47
Cloud      0    0      52    1

Table 6
The number of WAs = 5

Location   NO   FOCL   FOC   MCOWA
Local      65   0      0     7
Cloudlet   0    65     0     56
Cloud      0    0      65    2

Table 7
The number of WAs = 6

Location   NO   FOCL   FOC   MCOWA
Local      78   0      0     11
Cloudlet   0    78     0     66
Cloud      0    0      78    1

Table 8
The number of WAs = 7

Location   NO   FOCL   FOC   MCOWA
Local      91   0      0     14
Cloudlet   0    91     0     74
Cloud      0    0      91    3

5 Related work

MCC brings new services and facilities to MUs to take full advantage of cloud computing. However, the remote cloud is usually located far away from the MUs, which may result in high network latency. This inevitably reduces the QoS of MUs. In addition, MUs have to pay for the cloud resources they use. MEC is a new paradigm that can be seen as a special case of MCC. Different from MCC, MEC has a three-layer architecture. The edge server is a bridge that connects the MU and the cloud. An MU can connect to the edge server or the cloud according to the service requirements. There are many kinds of edge servers [43]. The cloudlet is a type of edge server, which has been widely used in LANs and wireless metropolitan area networks [25–27, 44–46]. A cloudlet is a low-cost infrastructure with rich computing resources, high bandwidth, and sufficient power. With the help of a cloudlet, an MU can improve QoS by computation offloading.
Computation offloading was originally studied in MCC [28–33]. The offloading mode in MEC is similar to that in MCC, but the main difference is the offloading destination. It is commonly assumed that the offloading destination in MCC is the remote cloud, whereas the offloading destination in MEC is an edge server such as a cloudlet. Different from the cloudlet, the resources of the remote cloud are assumed to be unlimited. Besides, MCC can be seen as a two-layer architecture, while MEC can be seen as a three-layer architecture.
Jia et al. [28] made a thorough study of how to partition and migrate applications in MCC by using a heuristic algorithm. The relationships between tasks in a WA are abstracted into serial and parallel structures, and a general WA is seen as a combination of the two. The core idea is to reduce the total processing latency of the tasks by increasing the parallelism between the local device and the cloud. Considering the unstable wireless channels and unstable service nodes in MCC, Wu et al. [29] proposed a min-cost offloading partitioning algorithm to find the best partitioning plan and minimize processing time and energy consumption. In view of the sequence of task processing in a WA, dynamic voltage and frequency scaling is used to set a number of flag bits and construct a joint optimization function of latency and energy consumption in [30], where a task scheduling strategy based on a simulated annealing algorithm is proposed to optimize the processing time and energy consumption of the WA.
There are many studies on WA scheduling in MCC. Deng et al. [31] propose a novel offloading system to design robust offloading decisions for mobile services, taking the dependency relations among component services into consideration. Their objective is to optimize the execution time and energy consumption of mobile services. Xu et al. [32] propose an energy consumption model for applications deployed across cloud computing platforms and a corresponding energy-aware resource allocation algorithm for VM scheduling to accomplish scientific workflow executions. Aiming at the problem of scheduling scientific WAs with deadline constraints in a multi-cloud environment, an adaptive discrete particle swarm optimization algorithm is proposed in [33], which can reduce the processing cost of a WA while meeting its deadline. As MEC and MCC have different architectures, the computation offloading methods in MCC cannot be used directly in the MEC scenario.
Jia et al. [34] proposed a computation offloading algorithm for augmented reality applications in the MEC environment. They hold the opinion that such applications involve multiple users and have strict latency requirements. They established a multi-user augmented reality game system model and proposed a corresponding multi-user computation offloading algorithm. Li et al. [35] proposed a migration algorithm that divides an application into multiple parts and migrates them to multiple cloudlets to minimize the task computation latency. However, their methods mainly focus on latency optimization. Liu et al. [36] use queuing theory to study the energy consumption, processing latency, and price cost of the offloading process in an MEC system. A scalarization scheme and an interior point method are used to address the formulated problem. They mainly target general applications in MEC and do not consider the computation offloading of WAs in MEC.
Zhang et al. [47] propose an energy-efficient offloading strategy for home automation applications in MEC. An improved particle swarm optimization algorithm is implemented to schedule mobile services, which minimizes the total energy consumption of the WAs within a given constant deadline. However, their approach focuses on the single-user and single-objective optimization scenario. Huang et al. [48] proposed a computation offloading method for multimedia workflows with deadline constraints in a cloudlet-based mobile cloud. Their objective is to minimize the energy consumption under the constraint of meeting the deadline of each multimedia workflow. In addition, a multi-objective computation offloading method for WAs is proposed in terms of energy consumption and time consumption [42].
Different from the existing research, we investigate the multi-objective computation offloading for WAs in terms of time consumption, energy consumption, and cost for WAs in MEC.

6 Conclusion

In this paper, we investigate a multi-objective computation offloading method for WAs in MEC. To tackle the problem, we have proposed a computation offloading algorithm (MCOWA) that finds the optimal offloading strategy while meeting the deadline constraints of the WAs. Extensive experimental evaluations have been conducted to show the efficiency and effectiveness of our proposed method.
In future work, we will focus on multi-objective optimization computation offloading from the perspective of edge servers in MEC. For one thing, the computation offloading for WAs in multi-cloudlet scenario will be studied [49, 50]. For another, the revenue of the edge server will be investigated [51].

Acknowledgements

This work is supported by The Natural Science Foundation of Fujian Province (Grant No.2018J05106), the Education and Scientific Research Projects of Young and Middle-aged Teachers in Fujian Province (JZ160084), and the Scientific Research Foundation of Huaqiao University under Grant No. 14BS316. China Scholarship Council (CSC) awarded Kai Peng’s 1-year research abroad at the University of British Columbia.

Competing interests

The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Literature
4. L. Qi, R. Wang, C. Hu, S. Li, Q. He, X. Xu, Time-aware distributed service recommendation with privacy-preservation. Inf. Sci. 480, 354–364 (2019).
5. K. Peng, V. C. Leung, Q. Huang, Clustering approach based on mini batch Kmeans for intrusion detection system over big data. IEEE Access 6, 11897–11906 (2018).
7. X. Xu, Q. Liu, Y. Luo, K. Peng, X. Zhang, S. Meng, L. Qi, A computation offloading method over big data for IoT-enabled cloud-edge computing. Futur. Gener. Comput. Syst. 95, 522–533 (2019).
10. X. Xu, S. Fu, L. Qi, X. Zhang, Q. Liu, Q. He, S. Li, An IoT-oriented data placement method with privacy preservation in cloud environment. J. Netw. Comput. Appl. 124, 148–157 (2018).
11. L. Qi, X. Zhang, W. Dou, Q. Ni, A distributed locality-sensitive hashing-based approach for cloud service recommendation from multi-source data. IEEE J. Sel. Areas Commun. 35(11), 2616–2624 (2017).
13. X. Xu, Y. Xue, L. Qi, Y. Yuan, X. Zhang, T. Umer, S. Wan, An edge computing-enabled computation offloading method with privacy preservation for internet of connected vehicles. Futur. Gener. Comput. Syst. 96, 89–100 (2019).
16. K. Wang, H. Yin, W. Quan, G. Min, Enabling collaborative edge computing for software defined vehicular networks. IEEE Netw. 32(5), 112–117 (2018).
17. X. Xu, Y. Li, T. Huang, Y. Xue, K. Peng, L. Qi, W. Dou, An energy-aware computation offloading method for smart edge computing in wireless metropolitan area networks. J. Netw. Comput. Appl. 133, 75–85 (2019).
21. W. Shi, H. Sun, J. Cao, Q. Zhang, W. Liu, Edge computing: an emerging computing model for the internet of everything era. J. Comput. Res. Dev. 54(5), 907–924 (2017).
22. P. Mach, Z. Becvar, Mobile edge computing: a survey on architecture and computation offloading. IEEE Commun. Surv. Tutor. 19(3), 1628–1656 (2017).
25. M. Satyanarayanan, P. Bahl, R. Caceres, N. Davies, The case for VM-based cloudlets in mobile computing. IEEE Pervasive Comput. 4, 14–23 (2009).
26. M. Satyanarayanan, G. Lewis, E. Morris, S. Simanta, J. Boleng, K. Ha, The role of cloudlets in hostile environments. IEEE Pervasive Comput. 12(4), 40–49 (2013).
27. M. Satyanarayanan, Z. Chen, K. Ha, W. Hu, W. Richter, P. Pillai, Cloudlets: at the leading edge of mobile-cloud convergence, in 6th International Conference on Mobile Computing, Applications and Services (IEEE, Austin, 2014), pp. 1–9.
28. M. Jia, J. Cao, L. Yang, Heuristic offloading of concurrent tasks for computation-intensive applications in mobile cloud computing, in 2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS) (IEEE, Toronto, 2014).
29. H. Wu, W. Knottenbelt, K. Wolter, Y. Sun, An optimal offloading partitioning algorithm in mobile cloud computing, in Evaluation of Systems (Springer, Cham, 2016).
30. H. Hu, R. Liu, H. Hu, Multi-objective optimization for task scheduling in mobile cloud computing. J. Comput. Res. Dev. 54(9), 1909–1919 (2017).
31. S. Deng, L. Huang, J. Taheri, A. Y. Zomaya, Computation offloading for service workflow in mobile cloud computing. IEEE Trans. Parallel Distrib. Syst. 26(12), 3317–3329 (2014).
32. X. Xu, W. Dou, X. Zhang, J. Chen, EnReal: an energy-aware resource allocation method for scientific workflow executions in cloud environment. IEEE Trans. Cloud Comput. 4(2), 166–179 (2015).
33. B. Lin, W. Guo, G. Chen, Scheduling strategy for science workflow with deadline constraint on multi-cloud. J. Commun. 39(1), 56–69 (2018).
34. M. Jia, W. Liang, Delay-sensitive multiplayer augmented reality game planning in mobile edge computing, in Proceedings of the 21st ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (ACM, Montreal, 2018).
35. B. Li, M. He, W. Wu, A. K. Sangaiah, G. Jeon, Computation offloading algorithm for arbitrarily divisible applications in mobile edge computing environments: an OCR case. Sustainability 10(17), 196–210 (2018).
36. L. Liu, Z. Chang, X. Guo, T. Ristaniemi, Multi-objective optimization for computation offloading in mobile-edge computing, in 2017 IEEE Symposium on Computers and Communications (ISCC) (IEEE, Heraklion, 2017).
37. X. Li, L. Qian, R. Ruiz, Cloud workflow scheduling with deadlines and time slot availability. IEEE Trans. Serv. Comput. 11(2), 329–340 (2018).
38. Z. Li, J. Ge, H. Hu, W. Song, H. Hu, B. Luo, Cost and energy aware scheduling algorithm for scientific workflows with deadline constraint in clouds. IEEE Trans. Serv. Comput. 11(4), 713–726 (2018).
39. J. Vilaplana, F. Solsona, I. Teixido, J. Mateo, F. Abella, J. Rius, A queuing theory model for cloud computing. J. Supercomput. 69(1), 492–507 (2014).
40. K. Deb, A. Pratap, S. Agarwal, T. A. M. T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002).
41. Y. Sun, F. Lin, H. Xu, Multi-objective optimization of resource scheduling in fog computing using an improved NSGA-II. Wirel. Pers. Commun. 102(2), 1369–1385 (2018).
43. S. Wang, Y. Zhao, J. Xu, J. Yuan, C. Hsu, Edge server placement in mobile edge computing. J. Parallel Distrib. Comput. 127, 160–168 (2019).
45. Z. Pang, L. Sun, Z. Wang, E. Tian, S. Yang, A survey of cloudlet based mobile computing, in 2015 International Conference on Cloud Computing and Big Data (CCBD) (IEEE, Shanghai, 2015).
46. X. Xu, Y. Li, Y. Yuan, K. Peng, W. Yu, W. Dou, A. X. Liu, An energy-aware virtual machine scheduling method for cloudlets in wireless metropolitan area networks, in 2018 IEEE Cyber, Physical and Social Computing (CPSCom) (IEEE, Halifax, 2018).
47. J. Zhang, Z. Zhou, S. Li, L. Gan, X. Zhang, L. Qi, W. Dou, Hybrid computation offloading for smart home automation in mobile cloud computing. Pers. Ubiquit. Comput. 22(1), 121–134 (2018).
48. T. Huang, F. Ruan, S. Xue, L. Qi, Y. Duan, Computation offloading for multimedia workflows with deadline constraints in cloudlet-based mobile cloud. Wirel. Netw. (2019), 1–15.
49. D. G. Roy, D. De, A. Mukherjee, R. Buyya, Application-aware cloudlet selection for computation offloading in multi-cloudlet environment. J. Supercomput. 73(4), 1672–1690 (2017).
50. A. Mukherjee, D. De, D. G. Roy, A power and latency aware cloudlet selection strategy for multi-cloudlet environment. IEEE Trans. Cloud Comput. 7(1), 141–154 (2016).
51. X. Xu, Q. Cai, G. Zhang, J. Zhang, W. Tian, X. Zhang, A. X. Liu, An incentive mechanism for crowdsourcing markets with social welfare maximization in cloud-edge computing. Concurr. Comput. Pract. Exp., e4961 (2018). https://doi.org/10.1002/cpe.4961.
