Published in: Cluster Computing 2/2024

Open Access 06.06.2023

An efficient multi-objective scheduling algorithm based on spider monkey and ant colony optimization in cloud computing

Authors: Dina A. Amer, Gamal Attiya, Ibrahim Ziedan



Abstract

Due to easier access, improved performance, and lower costs, the use of cloud services has increased dramatically. However, cloud service providers are still looking for ways to complete users' jobs quickly, to increase profits and reduce energy costs. To achieve this goal, many algorithms for the scheduling problem have been introduced. However, most techniques consider only a single objective in the scheduling process. This paper presents a new hybrid multi-objective algorithm, called SMO_ACO, for addressing the scheduling problem. The proposed SMO_ACO algorithm combines the Spider Monkey Optimization (SMO) and Ant Colony Optimization (ACO) algorithms. Additionally, a fitness function is formulated to tackle four objectives of the scheduling problem: schedule length, execution cost, consumed energy, and resource utilization. The proposed algorithm is implemented using the CloudSim toolkit and evaluated for different workloads. The performance of the proposed technique is verified using several performance metrics, and the results are compared with the most recent existing algorithms. The results prove that the proposed SMO_ACO approach allocates resources efficiently while maintaining cloud performance in a way that increases profits.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Cloud computing provides companies with a high level of flexibility for the offered services. It pools a massive number of resources to support a large variety of IT services through the Internet. Enterprises and individuals no longer need to pay a fortune to purchase expensive hardware; they can use computing power on the cloud with minimal management effort. Further, they can increase or decrease cloud resources based on predefined policies [1]. To increase cloud computing usage, cloud vendors tend to improve the Quality of Service (QoS) among different users. They often take into consideration distinct factors related to cloud services, such as cost, safety, throughput, accessibility, availability, and implementation time [2]. However, cloud computing faces many challenges in data security, workload balancing, resource allocation and scheduling, and power consumption [3].
Resource scheduling is one of the major problems in the cloud. It refers to the process of distributing tasks of a given application onto available Virtual Machines (VMs) in the cloud to achieve one or more objectives. Since the number of VMs is limited and has different capabilities, an efficient scheduling method is required to carefully schedule application tasks onto the VMs [4].
The scheduling problem is generally known to be NP-hard, due to the varying nature of application requirements and resource availability [5]. Therefore, many heuristic and meta-heuristic techniques have been developed to tackle this problem with different objectives. In the early stage, many heuristics were presented to solve the scheduling problem [6, 7]. However, most heuristic strategies depend largely on certain underlying rules to improve the quality of solutions for large-sized problems [8]. Meta-heuristic techniques, on the other hand, have proved to be more effective in solving a wide variety of real-world complex optimization problems, including scheduling problems [9, 10]. The most popular meta-heuristic algorithms used to solve the scheduling problem in cloud computing are the Genetic Algorithm (GA) [11], Particle Swarm Optimization (PSO) [12], and Ant Colony Optimization (ACO) [13]. However, they have some weaknesses; for example, tuning the control parameters may lead to falling into a local optimum, slow convergence under iterative conditions, and higher memory consumption. To overcome these defects, an encouraging research direction hybridizes two or more heuristics and meta-heuristics to take advantage of these methods while minimizing their drawbacks.
This paper first introduces an optimization approach called Spider Monkey Optimization (SMO) and then proposes a hybrid approach combining the SMO and the ACO for addressing the multi-objective scheduling problem. The SMO is inspired by the social behavior of spider monkeys. The social structure of SMO is the fission–fusion social structure, whereby the group is first divided into sub-groups, which are merged again to form a new group after certain criteria are met. The main advantage here is that it reduces direct competition among group members for forage. Creating new groups attracts innovative solutions in various subgroups, thus improving exploration capabilities. When the maximum number of groups has been created and the solution has not changed, all subgroups are merged, thus balancing exploration and exploitation capabilities while maintaining convergence velocity. The SMO algorithm has been used to solve many problems, such as feature selection [14], the Traveling Salesman Problem (TSP) [15], thinning of concentric circular antenna arrays [16], and the placement of energy storage units in distributed systems [17]. Further, the SMO has been hybridized with the Nelder–Mead method for global optimization [18].
The main motivations of this paper are twofold. The first is to solve the multi-objective scheduling problem by identifying the best resources for users' cloudlets, so as to improve QoS parameters such as schedule length, execution cost, consumed energy, throughput, and balance degree. This is achieved by assigning the user tasks to the available virtual machines efficiently, which in turn improves cloud resource utilization. The second is that the integration of the SMO and ACO provides an effective process for scheduling cloud tasks by optimizing the execution cost, schedule length, and consumed energy.
The main contributions of this paper are:
  • Formulating and presenting a multi-objective scheduling model for minimizing schedule length, execution cost, and consumed energy, and maximizing resource utilization, throughput, and balance degree.
  • Improving the SMO by enhancing the search process in the local leader decision phase, then implementing a hybrid approach combining the improved SMO and the ACO algorithm.
  • Evaluating the effect of different scenarios on the schedule length, execution cost, and consumed energy of the cloud system to provide a clear evaluation of the proposed SMO_ACO algorithm.
The rest of the paper is organized as follows. Section 2 presents a literature survey of related work. In Sect. 3, the problem statement and the multi-objective formulation are presented. Section 4 first introduces the basics of the SMO and ACO algorithms in detail and then describes the implementation of the proposed SMO_ACO. The experimental results and discussion are presented in Sect. 5. Section 6 presents the concluding remarks of this research work.
2 Related work

Recently, a massive number of optimization methods have been developed to address the multi-objective scheduling problem. The authors in [19] introduce a three-way clustering method into the cloud platform to improve the average task response time and resource utilization, and to balance the cluster load while also reducing energy consumption. An improved version of the Whale optimization algorithm is presented in [20] to overcome the premature convergence problem of the conventional Whale algorithm. The proposed algorithm is compared with the WOA, PSO, and bat algorithms in terms of reducing the average execution time and response time and increasing the throughput. In [21], the goal was to reduce the service time and the power consumed by computing and non-computing resources by applying a rack-aware scheduling algorithm based on the genetic algorithm. In [22], the authors introduce a flexible scheduling technique to overcome the trade-off between the user's time requirements and the provider's energy consumption requirements; a tuning parameter is adjusted to either reduce time or reduce energy usage. Also, the authors in [23] offer a solution for the multi-objective task scheduling problem in a cloud environment, where the proposed multi-objective optimization algorithm is designed to optimize two significant objectives simultaneously: minimizing energy consumption and minimizing the schedule length.
Several algorithms have been developed to minimize energy consumption, such as the task-scheduling algorithm in [24]. The main goal of this algorithm is to distribute the workload among all available virtual machines with a minimum schedule length, which in turn enhances resource utilization and reduces energy consumption. In [25], the authors propose an energy-efficient hybrid (EEH) system to increase the efficiency of electrical energy usage in data centers. Simulation results indicate the proposed framework's dominance over existing methods for reducing power consumption in terms of Data Center Energy Productivity (DCEP), Power Usage Effectiveness (PUE), throughput, average execution time, and cost savings. The interconnection between three major parameters was clarified in [26]: resource utilization, workload performance, and energy consumption. The authors used a bin-packing technique, where the bins represent the devices with the servers in this model, and dispatches and allocations are defined through the bin-packing technique. The consumed energy can be reduced by reducing the average completion time of the cloud computing cluster, as proposed in [27]. A power consumption model and loss comparison rules are proposed that use the heterogeneity of the virtual machines in the cluster to compare the difference between the best and the second-best completion times of tasks on the same virtual machine. In [28], a task deadline-aware energy-efficient scheduling model for a virtualized cloud is introduced. Energy efficiency is achieved by executing the full workload in the host's operating state and saving the maximum energy in idle hosts. The scheduling model is assessed based on the guaranteed ratio, total energy usage, task energy consumption, and resource utilization.
In [29], the authors addressed the importance of cloud computing energy management strategies by identifying the key elements that should be considered when designing an energy management strategy. The most important performance indicators that can be used to assess an energy management plan were also discussed, and several current energy management techniques were reviewed and categorized based on the proposed energy management model. The authors in [30] integrated energy-efficient cloud computing hardware and software techniques; to achieve energy efficiency, they adjust the number of VMs, the number of cores, and the processor frequency scaling. A method of Energy-aware Task Consolidation (ETC) that reduces energy consumption was proposed in [31]. ETC does this by restricting CPU usage below a specified peak threshold and by consolidating tasks among virtual clusters. Additionally, when a task migrates to another virtual cluster, the energy cost model considers network latency. A recent meta-heuristic, the Harris Hawks Optimization (HHO), was improved by applying elite opposition-based learning to enhance the quality of the exploration phase of the conventional HHO algorithm during the search process, balancing exploitation and exploration [32]; the performance metrics used are the convergence rate and the schedule length. In [10], the authors introduce a comparative analysis and comprehensive survey of various task scheduling algorithms for cloud and grid environments, considering three popular meta-heuristic techniques, GA, PSO, and ACO, as well as the BAT algorithm and the League Championship Algorithm (LCA). In [33], a hybrid scheduling algorithm is developed based on a combination of the Firefly Algorithm (FA) and the Imperialist Competitive Algorithm (ICA) to improve both makespan and load balancing.
In [34], a multi-objective nested PSO was proposed for the task scheduling problem in cloud computing to optimize the overall computation time and energy consumption, without considering other metrics such as cost and balance degree. In [35], the ACO is used for task scheduling in cloud computing; the authors proposed an improvement of the ACO algorithm that achieves task scheduling with load balancing by building on the results of past task scheduling. The work in [36] presents a method that hybridizes the Multi-Verse Optimizer (MVO) with the Genetic Algorithm (GA) to minimize the transfer time of tasks by scheduling them over all available resources; the approach improves the conventional MVO by applying crossover and mutation processes to enhance the task schedule initiated by the MVO. In [37], a task scheduling technique based on Gradient-Based Optimization (GBO) is proposed to reduce the makespan of a system for the given computing resources; it first maps the task scheduling scheme to a vector model and then applies the GBO algorithm to determine a near-optimal solution that improves the makespan.
The cloud environment has two main entities: cloud providers and cloud consumers. In short, most research on task scheduling has focused on a single entity, either cloud consumers or cloud providers. Further, most studies consider a single objective in the scheduling process, although in essence this problem is usually multi-objective. In a cloud environment, every cloud computing user aspires to low-cost and high-performance cloud services, while cloud service providers strive to provide cloud services with high resource utilization and low consumed energy to achieve high revenue and profit. Therefore, we consider the multi-objective task scheduling problem. In addition, most researchers formulate the problem objective without considering application requirements and the availability of cloud resources. Therefore, in our work, the scheduling problem is formulated as an optimization problem with a fitness function representing the scheduling objective, subject to a set of constraints that express application requirements and cloud resource availability. Table 1 summarizes some existing scheduling techniques in terms of the considered objectives, including response time.
Table 1
Task scheduling techniques (objectives compared: response time, schedule length, execution cost, consumed energy, resource utilization, balance degree)

[19], 2019: Three-way clustering into the cloud platform, and the three-way clustering weight algorithm (TWCW)
[20], 2019: Optimized version of the whale optimization algorithm
[21], 2020: Harmony-inspired genetic algorithm
[22], 2018: Energy-performance trade-off multi-resource cloud task scheduling algorithm
[23], 2021: Non-dominated Sorting Genetic Algorithm (NSGA) based algorithm
[24], 2020: Task-scheduling algorithm based on best–worst and the Technique for Order Preference by Similarity to Ideal Solution
[25], 2020: Energy-efficient hybrid (EEH) framework for electrical energy usage in data centers
[26], 2008: Consolidation of applications in cloud computing environments
[27], 2020: First of maximum loss scheduling algorithm (FOML)
[28], 2018: Task deadline-aware energy-efficient scheduling model
[29], 2018: Energy management strategy based on historical data collected from the concerned host
[30], 2014: Energy management model that combines three management techniques
[31], 2014: Energy-aware task consolidation (ETC) technique that minimizes energy consumption
[32], 2022: Modified Harris hawks optimization algorithm based on the elite opposition-based learning method
[33], 2020: Imperialist competitive algorithm (ICA) and firefly algorithm (FA)
[34], 2015: Multi-objective nested PSO
[35], 2013: Ant colony optimization (ACO) algorithm
[36], 2022: Multi-verse optimizer and genetic algorithm
[37], 2022: Gradient-based optimization approach
Our proposed algorithm: SMO_ACO, combining Spider Monkey Optimization (SMO) and Ant Colony Optimization (ACO)

3 Problem description and formulation

As mentioned previously, the scheduling problem is the process of assigning users’ tasks/cloudlets to the available VMs in the cloud environment. Cloud infrastructure in cloud data centers is organized as a pool of homogeneous and heterogeneous resources. Each server in a data center can host one or more virtual machines. The virtualization layer provides highly scalable computing resources as virtual resources to clients [38]. Further, there is a data center broker that assigns user tasks to the available resources in the data center, so it may be considered the backbone of the scheduling process. Figure 1 shows the framework of the scheduling process. When a cloud user sends tasks to the cloud, the tasks first enter the task manager component, which manages submitted requests and provides the status of each task to the user who sent the request. The task manager then forwards a job request to the cloud scheduler, which uses the proposed SMO_ACO scheduling algorithm to allocate incoming jobs to available virtual machines. The SMO_ACO takes its scheduling decision according to the task requirements and VM information included in the Cloud Information System (CIS).
This section describes the scheduling problem in terms of a cloud data center model, a task model, and a VM model, as well as a multi-objective model.

3.1 Cloud model

Depending on resource requirements, a cloud system can be composed of a single data center or multiple data centers. In the public cloud system model considered here, there is a set of data centers (DC) containing P data centers, i.e., \(DC=\left\{d{c}_{1},d{c}_{2},\dots .d{c}_{P}\right\}\). Each data center has multiple physical hosts (PHs); for example, a data center \(d{c}_{r}\) has a set of k physical hosts \(\{p{h}_{r1}, p{h}_{r2},\dots ..p{h}_{rk}\}\). Each host has a specific number of cores that determine the host's Million Instructions Per Second (MIPS), as well as memory, storage, bandwidth, and a Virtual Machine Manager (VMM). The VMM running on the host is primarily responsible for managing and maintaining all VMs on the host.

3.2 Virtual machine model

Each physical host \(ph\) has a set of m virtual machines \(\left\{{vm}_{1}, {vm}_{2},\dots \dots ..{vm}_{m}\right\}\). Each \({vm}_{i}\) has specific configurations, such as main memory (\({vm}_{mem}\)), storage (\({vm}_{st}\)), processing power (\({vm}_{mips}\)) in MIPS, and the number of cores (\({vm}_{cpus}\)), as well as the price of this \(vm\) per hour.

3.3 Task model

In the cloud environment, cloud consumers submit their independent tasks to the service provider for processing without needing to understand the complexities of the system infrastructure. These tasks differ in their requirements, which are related to the task length and the required resources. Briefly, the users submit n tasks/cloudlets {\(C{l}_{1},C{l}_{2}, \dots \dots . C{l}_{n}\}\) for processing. Each cloudlet has a specific length \({\mathcal{l}(Cl}_{i })\) in Million Instructions (MI). The scheduler first calculates the Execution Time \((ET)\) of each task \(C{l}_{i}\) on each \(v{m}_{j}\), as:
$$ET\left({Cl}_{i} ,v{m}_{j}\right)=\frac{{\mathcal{l}(Cl}_{i })}{total\_MIPS({vm}_{j})}$$
(1)
where, \(total\_MIPS(v{m}_{j})=\left({vm}_{cpus}\right)*{vm}_{mips}\). Then, the scheduler calculates the energy consumed by each virtual machine to process all its assigned tasks, as well as the cost of performing each task on that \(vm\). To minimize the cost of task processing, the scheduler determines which virtual machine has an appropriate execution cost and meets the requirements of each task. Since each task can be processed on any VM and different VMs have different processing powers, the Execution Time \(ET\left({Cl}_{i} ,v{m}_{j}\right)\) and the cost of processing task \({Cl}_{i}\) differ across VMs [39]. Minimizing the execution time, energy consumption, and cost while simultaneously maximizing resource utilization gives rise to a multi-objective scheduling problem [40]. Therefore, the purpose of this research work is to solve the multi-objective scheduling problem using the proposed hybrid SMO_ACO algorithm.
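As a concrete illustration of Eq. (1), the execution-time computation may be sketched as follows (a minimal sketch; the function name and sample values are ours, for illustration only):

```python
# Sketch of Eq. (1): ET(Cl_i, vm_j) = l(Cl_i) / total_MIPS(vm_j),
# with total_MIPS(vm_j) = vm_cpus * vm_mips. Illustrative values only.

def execution_time(length_mi, vm_cpus, vm_mips):
    """Execution time (s) of a cloudlet of length_mi MI on a VM."""
    return length_mi / (vm_cpus * vm_mips)

# Example: a 4000-MI cloudlet on a 2-core VM with 1000 MIPS per core.
et = execution_time(4000, 2, 1000)   # 4000 / 2000 = 2.0 s
```

The scheduler would evaluate this quantity for every (cloudlet, VM) pair before taking a scheduling decision.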

3.4 Multi-objective model

There are two primary types of entities involved in the cloud: service providers and consumers of the cloud. Cloud service providers offer their resources to cloud consumers, and cloud consumers submit their jobs for processing. Consumers care more about application performance, while service providers care more about the efficient use of resources to get more profits. Therefore, the objectives can be divided into two types: consumer desires and provider desires.

3.4.1 Consumer desires objectives

  • Objective 1: schedule length
Schedule length is defined as the maximum completion time over all submitted tasks, i.e., the finish time of the last VM to complete processing. It is considered an essential metric for assessing the quality of the scheduler. A low schedule length indicates a good scheduling strategy that efficiently assigns tasks to the appropriate resources [41], while a high schedule length denotes a weak scheduling strategy. Equation (2) computes the value of the Schedule Length (SL).
$$SL=\mathrm{max}\left(\sum_{i=1}^{n}ET\left({Cl}_{i} ,{vm}_{j}\right){x}_{ij}\right) \forall VM j$$
(2)
where, \({x}_{ij}\) is a binary decision variable indicating whether task \(C{l}_{i}\) is assigned to \({vm}_{j}\):
$${x}_{ij}=\left\{\begin{array}{cc}1& \text{if } C{l}_{i} \text{ is allocated to } {vm}_{j}\\ 0& \text{otherwise}\end{array}\right.$$
(3)
Since the objective is to minimize execution time, the first objective \({O}_{1}\) may be defined as:
$${O}_{1}=\mathrm{min}(\mathrm{SL})$$
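The schedule-length computation of Eqs. (2)-(3) may be sketched as follows (all data and names are illustrative, not from the paper):

```python
# Sketch of Eqs. (2)-(3): SL is the maximum per-VM finish time under a
# binary assignment matrix x[i][j]. Illustrative data only.

def schedule_length(et, x):
    """et[i][j]: execution time of task i on VM j; x[i][j] in {0, 1}."""
    n, m = len(et), len(et[0])
    return max(sum(et[i][j] * x[i][j] for i in range(n)) for j in range(m))

et = [[2.0, 1.0],
      [4.0, 2.0],
      [6.0, 3.0]]
x = [[1, 0],   # task 0 -> VM 0
     [0, 1],   # task 1 -> VM 1
     [0, 1]]   # task 2 -> VM 1
sl = schedule_length(et, x)   # VM 0 busy 2.0, VM 1 busy 5.0 -> SL = 5.0
```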
  • Objective 2: execution cost
The Execution Cost (EC) is the total price of executing a user's application. It tends to be the most measurable metric today, but it is important to express the cost in terms of the given resources, since the user's goal is to reduce both the cost and the schedule length. In this scheme, the execution cost is computed as follows:
$$EC=\sum_{j=1}^{m }E{T}_{j}*pric{e}_{j}$$
(4)
where, \(E{T}_{j}\) is the total execution time of the tasks assigned to the \({j}^{th}\) virtual machine, measured after it executes its last task, and \(pric{e}_{j}\) is the price of that virtual machine per unit time [42]. Therefore, the second objective \({O}_{2}\) could be defined as:
$${O}_{2}=\mathrm{min}\left(\mathrm{EC}\right)$$
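Equation (4) may be sketched as follows (the busy times and prices below are invented for illustration):

```python
# Sketch of Eq. (4): EC = sum over VMs of (busy time * price per unit
# time). The price values are illustrative, not the paper's settings.

def execution_cost(vm_busy_time, vm_price):
    return sum(t * p for t, p in zip(vm_busy_time, vm_price))

# Two VMs: busy 2.0 s and 5.0 s, priced 0.10 and 0.20 per second.
ec = execution_cost([2.0, 5.0], [0.10, 0.20])   # 0.2 + 1.0 = 1.2
```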

3.4.2 Provider desires objectives

  • Objective 3: energy consumption
Energy consumption in data centers includes the consumption of CPUs, network interfaces, and storage devices. Compared to other system resources, the CPU consumes the most energy. The energy consumed by a VM is divided into two parts: the energy consumed in the idle state and in the active state, and both states are considered when calculating the total consumed energy. The energy consumed in the idle state is approximately 60% of that of a fully running VM [43], while the energy consumed in the active state depends on the computing speed of the VM, as in Eq. (5).
$$A{E}_{i}={10}^{-8}*{\left({{vm}_{i}}_{mips}\right)}^{2}\ \frac{J}{MI},\quad I{E}_{i}=0.6*A{E}_{i}$$
(5)
where, \(A{E}_{i}\) is the energy consumed in the active state and \(I{E}_{i}\) is the energy consumed in the idle state [44]. Therefore, the energy model combines both states, as in Eqs. (6) and (7):
$${TE}_{i}=A{E}_{i}+I{E}_{i}$$
(6)
$${TE}_{i}=E{T}_{i}*A{E}_{i}+\left(SL-E{T}_{i}\right)*I{E}_{i}$$
(7)
Hence, the third objective \({O}_{3}\) could be defined as:
\({O}_{3}=\mathrm{min}\left(\sum_{i=1}^{m}T{E}_{i}\right)\).
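The energy model of Eqs. (5)-(7) may be sketched as follows (a minimal sketch following the equations as written; the MIPS values and times are illustrative):

```python
# Sketch of Eqs. (5)-(7): AE_i = 1e-8 * vm_mips^2 (active rate),
# IE_i = 0.6 * AE_i (idle rate), and per Eq. (7) the total energy
# weights the active rate by busy time and the idle rate by the
# remaining time up to SL. Illustrative values only.

def vm_energy(busy_time, sl, vm_mips):
    ae = 1e-8 * vm_mips ** 2       # active-state rate, Eq. (5)
    ie = 0.6 * ae                  # idle state is ~60% of active
    return busy_time * ae + (sl - busy_time) * ie   # Eq. (7)

# Two VMs under a schedule of length SL = 5.0: one at 2000 MIPS busy
# for 2.0 s, one at 1000 MIPS busy for the full 5.0 s.
total = sum(vm_energy(t, 5.0, mips) for t, mips in [(2.0, 2000), (5.0, 1000)])
```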
  • Objective 4: resource utilization
Resource Utilization (RU) is an essential objective for the cloud service provider. The provider aims to maximize resource utilization to keep resources occupied as much as possible. Since service providers hope to obtain the most profit from a limited number of resources, this metric has become more important. Average resource utilization can be calculated by Eq. (8) [45].
$$average \,RU=\frac{\sum_{i=1}^{m}E{T}_{i}}{{O}_{1}}$$
(8)
where, \({O}_{1}\) represents the minimum SL and \(E{T}_{i}\) is the total execution time of the tasks assigned to the \({i}^{th}\) virtual machine. A high RU means that the available VMs are utilized efficiently in processing the submitted tasks.
The fourth objective \({O}_{4}\) may be defined as:
$${O}_{4}=\mathrm{max}\left(RU\right)$$
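Equation (8) may be sketched as follows (illustrative data; note that, as the equation is written, the sum of per-VM busy times is divided only by the schedule length, so the value grows with the number of VMs):

```python
# Sketch of Eq. (8): average RU = (sum of per-VM busy times) / SL.
# Data is illustrative, not from the paper.

def average_ru(vm_busy_time, sl):
    return sum(vm_busy_time) / sl

ru = average_ru([2.0, 5.0], 5.0)   # 7.0 / 5.0 = 1.4 for m = 2 VMs
```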

3.5 Problem formulation

The scheduling problem may be formulated as an optimization problem consisting of an objective function subject to a set of constraints. The fitness/objective function is modeled to minimize the average of the three objectives \({O}_{1}\), \({O}_{2}\), and \({O}_{3}\) mentioned above, as follows:
$$F=\mathrm{min}\,(\frac{1}{3}\sum_{i=1}^{3}{O}_{i})$$
(9)
A solution with a minimal fitness value \(F\) leads, in turn, to maximizing the above-mentioned objective \({O}_{4}\). Several constraints are modeled to consider the application requirements and system availability, as follows:
$$\sum_{i=1}^{n}{\mathcal{l}(Cl}_{i }) {x}_{ij} \le Tota{l}_{MIPS}\left({vm}_{{\varvec{j}}}\right) \forall VM j$$
(10)
$$\sum_{i=1}^{n}mem({Cl}_{i }) {x}_{ij} \le \left({{vm}_{j}}_{mem}\right) \forall VM j$$
(11)
$$\sum_{j=1}^{m}{x}_{ij}=1 \forall task {t}_{i}$$
(12)
These constraints are formulated to meet task requirements without wasting cloud resources. The first constraint, Eq. (10), prevents overloading any virtual machine (\(v{m}_{j}\)) and keeps the system balanced: it ensures that the total load of all tasks assigned to \(v{m}_{j}\) does not exceed its processing power, and that the number of tasks assigned to a virtual machine at any one time is at most \({vm}_{cpus}\). The second constraint, Eq. (11), guarantees that the memory required to process all tasks assigned to a virtual machine does not exceed its available memory. Finally, the third constraint, Eq. (12), ensures that each task is allocated to one and only one VM.
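The feasibility check implied by Eqs. (10)-(12) may be sketched as follows (all structures and sample values are illustrative):

```python
# Sketch of constraints (10)-(12): per-VM capacity, per-VM memory,
# and unique assignment of each task. Illustrative data only.

def feasible(x, length_mi, mem_req, vm_total_mips, vm_mem):
    n, m = len(x), len(x[0])
    for j in range(m):
        load = sum(length_mi[i] * x[i][j] for i in range(n))
        mem = sum(mem_req[i] * x[i][j] for i in range(n))
        if load > vm_total_mips[j] or mem > vm_mem[j]:  # Eqs. (10)-(11)
            return False
    # Eq. (12): each task allocated to exactly one VM.
    return all(sum(x[i][j] for j in range(m)) == 1 for i in range(n))

ok = feasible([[1, 0], [0, 1]], [500, 800], [128, 256],
              [2000, 1000], [512, 512])   # satisfies all constraints
```

A scheduler would reject (or repair) any candidate assignment for which this check fails before evaluating its fitness.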

4 Proposed hybrid algorithm for scheduling problem

This section presents a new hybrid multi-objective algorithm called SMO_ACO to solve the scheduling problem. The proposed hybrid algorithm combines a modified version of the Spider Monkey Optimization (SMO) algorithm and the Ant Colony Optimization (ACO) algorithm. In the following, the ACO and SMO algorithms are first presented, and then the SMO_ACO is introduced in detail.

4.1 Ant colony optimization

The Ant Colony Optimization (ACO) algorithm simulates the cooperative behavior of real ants searching for food. It has been applied to combinatorial optimization problems and has proven to be an efficient mechanism for solving many problems [46]. Ants reach a food source by leaving a chemical trail (pheromone) on the ground between the nest and the food, which marks the best path to the food source. The role of this trail is to direct the other ants to the target point: the higher the pheromone value on a given path, the greater the probability (\(Prob\)) that ants will choose that path. For solving the scheduling problem, the main steps of the ACO algorithm are as follows:
i) Pheromone initialization
The value of pheromone \({\delta }_{ij}(r)\) on the path \(ij\) represents the concentration of pheromone for task \(i\) mapped to virtual machine j (\({vm}_{j}\)). The initial value of the pheromone \(\delta (0)\) is set to a small positive constant \({\delta }_{0}\).
ii) Selection rule
At each iteration of the algorithm, the probability function in Eq. (13) is applied to select a virtual machine \(v{m}_{j}\) for executing task \({t}_{i}\).
$${Prob}_{ij}^{k}\left(r\right)=\left\{\begin{array}{cc}\frac{{\delta }_{ij}\left(r\right){\vartheta }_{ij}(r)}{\sum_{z\in {A}^{k}}{\delta }_{iz}\left(r\right){\vartheta }_{iz}(r)}& z\in {A}^{k}\\ 0& \text{otherwise}\end{array}\right.$$
(13)
where, \({\delta }_{ij}(r)\) is the pheromone value of task \({t}_{i}\) on virtual machine \({vm}_{j}\), and \({A}^{k}\) is the set of VMs that have not yet been selected by ant \(k\) for any task in iteration \(r\). The heuristic information \({\vartheta }_{ij}\left(r\right)\), representing the visibility of ant \(k\) in the current iteration, is given by:
$${\vartheta }_{ij}\left(r\right)=\frac{1}{MI({Cl}_{i})/MIPS({vm}_{j})}=\frac{MIPS({vm}_{j})}{MI({Cl}_{i})}$$
(14)
$$MIPS(v{m}_{j})=pe\_No*mips\_pe$$
(15)
where, \(pe\_No\) is the number of processing elements (cores) of \({vm}_{j}\) and \(mips\_pe\) is the MIPS of each core.
iii) Pheromone updating
After an ant completes a tour in which it assigns all tasks, the local pheromone is updated according to Eq. (16) [47]:
$$\Delta {\delta }_{ij}^{k}\left(r\right)=\left\{\begin{array}{cc}\frac{Q}{{L}_{K}\left(r\right)}& \text{if } \left(i,j\right)\in {T}_{k}\left(r\right)\\ 0& \text{otherwise}\end{array}\right.$$
(16)
where, Q is a control parameter and \({L}_{K}\left(r\right)\) is the tour length, i.e., the schedule length generated in this tour. \({T}_{k}(r)\) represents the taboo list of ant \(k\) in this tour, which contains all VMs visited during the tour. The global pheromone is updated by Eq. (17) at the end of each iteration:
$${\delta }_{ij}\left(r+1\right)=(1-\rho ){\delta }_{ij}\left(r\right)+\Delta {\delta }_{ij}(r)$$
(17)
where, \(\rho\) is the pheromone evaporation rate (\(0<\rho <1\)), \(\Delta {\delta }_{ij}\left(r\right)=\sum_{k=1}^{n}{\Delta \delta }_{ij}^{k}(r)\), and n is the number of ants.
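The ACO steps above (Eqs. (13)-(17)) may be sketched as follows. This is a minimal sketch, not the paper's implementation: the parameter values (rho, Q, the pheromone and heuristic matrices) are illustrative choices.

```python
# Sketch of Eqs. (13)-(17): roulette-wheel VM selection weighted by
# pheromone * heuristic, then evaporation plus deposit Q / L_k(r).
import random

def select_vm(pher_row, heur_row, rng):
    """Pick a VM index with probability proportional to delta * theta."""
    weights = [d * h for d, h in zip(pher_row, heur_row)]
    total = sum(weights)
    r, acc = rng.random() * total, 0.0
    for j, w in enumerate(weights):
        acc += w
        if r <= acc:
            return j
    return len(weights) - 1

def update_pheromone(pher, tours, tour_lengths, rho=0.5, q=1.0):
    """Eq. (17): evaporate, then deposit Eq. (16) on each visited (i, j)."""
    for i in range(len(pher)):
        for j in range(len(pher[0])):
            pher[i][j] *= (1 - rho)
    for tour, length in zip(tours, tour_lengths):
        for i, j in enumerate(tour):       # task i assigned to VM j
            pher[i][j] += q / length
    return pher

rng = random.Random(0)
pher = [[1.0, 1.0], [1.0, 1.0]]        # 2 tasks x 2 VMs
heur = [[0.5, 1.0], [0.25, 0.5]]       # visibility 1 / ET, as in Eq. (14)
tour = [select_vm(pher[i], heur[i], rng) for i in range(2)]
pher = update_pheromone(pher, [tour], [4.0])
```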

4.2 Spider monkey optimization

The Spider Monkey Optimization (SMO) algorithm is a recent swarm intelligence algorithm, launched in 2014 to solve global optimization problems [48]. The SMO algorithm imitates the foraging behavior of spider monkeys, which belong to the class of animals with a fission–fusion social structure (FFSS). In the FFSS, direct foraging competition is minimized by breaking the main monkey group into subgroups to search for food. Depending on food availability, subgroup members communicate through barking and other physical activities. In this type of society, depending on the environment or social conditions, the parent group can be divided into smaller subgroups (fission), or the subgroups can be merged into a larger group again (fusion). In the food search, the subgroups are led by a female Global Leader (GL); when food is scarce in the search areas, the global leader decides to divide the main group into subgroups to explore food in different regions. Each subgroup also has a Local Leader (LL), which is responsible for planning an efficient food search direction every day. The members of these subgroups interact within and outside the subgroup based on the accessibility of food sources and to maintain regional borders [49].

4.2.1 Main steps of the SMO algorithm

The SMO searching process has seven phases: the initialization phase, the Global Leader Phase (GLP), the Local Leader Phase (LLP), the Global Leader Learning (GLL) phase, the Local Leader Learning (LLL) phase, the Local Leader Decision (LLD) phase, and the Global Leader Decision (GLD) phase.
1) The initialization phase
Firstly, the SMO generates a uniformly random population of A spider monkeys, where each spider monkey \(S{M}_{i}\) (i = 1, 2, …, A) is a D-dimensional vector representing the ith candidate solution in the population. Its value may be calculated by:
$$S{M}_{il}=S{M}_{minl}+rand(\mathrm{0,1})*(S{M}_{maxl}-S{M}_{minl})$$
(18)
where \(S{M}_{minl}\) and \(S{M}_{maxl}\) are the minimum and maximum bounds of \(S{M}_{i}\) in the \({l}^{th}\) dimension [48], and \(rand(\mathrm{0,1})\) is a uniformly distributed random number in the range [0, 1].
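Eq. (18) amounts to uniform sampling inside per-dimension bounds. A minimal sketch (the bounds, population size, and dimension are illustrative, not taken from the paper's experiments):

```python
import numpy as np

def init_population(A, D, sm_min, sm_max, rng=None):
    """Eq. (18): SM_il = SM_min_l + rand(0,1) * (SM_max_l - SM_min_l),
    drawn independently for every monkey i and dimension l."""
    rng = rng or np.random.default_rng(42)
    r = rng.random((A, D))                  # uniform in [0, 1)
    return sm_min + r * (sm_max - sm_min)

# 50 monkeys (population size from Table 6), 10-dimensional, bounds [0, 39]
pop = init_population(50, 10, sm_min=np.zeros(10), sm_max=np.full(10, 39.0))
```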
2) Local leader phase
In the LLP, each SM updates its position depending on the experience of the other group members and the local leader of its group. The new position is evaluated by its fitness value; if the new fitness value is better than the old one, the SM moves to the newly generated position. The position is updated according to Eq. (19) [48].
$$S{M}_{newil}=\left\{\begin{array}{c}S{M}_{il}+rand\left(\mathrm{0,1}\right)*\left(L{L}_{kl}-S{M}_{il}\right)+rand\left(1,-1\right)*\left(S{M}_{rl}-S{M}_{il}\right)\quad rand\left(\mathrm{0,1}\right)\ge pr\\ S{M}_{il}\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad otherwise\end{array}\right.$$
(19)
where \(l\) is the \({l}^{th}\) dimension of the ith spider monkey, \(L{L}_{kl}\) is the \({l}^{th}\) dimension of the LL position of group k, and \(pr\) is the probability of perturbing the current position, with a value in the range [0.1, 0.9]. \(S{M}_{rl}\) is the \({l}^{th}\) dimension of an SM chosen randomly from the group (\(i\ne r\)), and \(rand\left(-1,1\right)\) is a uniformly distributed random number in the range [−1, 1]. The LLP implementation steps are given in Algorithm 1.
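A sketch of the LLP update of Eq. (19). The paper additionally requires the randomly chosen member r to differ from i; this simplified version does not enforce that check, and the function signature is ours:

```python
import numpy as np

def local_leader_phase(sm, ll, pop, pr, rng):
    """Eq. (19): with probability (1 - pr) per dimension, pull monkey `sm`
    toward its local leader `ll` plus a random push from a random group
    member; otherwise keep that dimension unchanged."""
    new = sm.copy()
    for l in range(sm.size):
        if rng.random() >= pr:                 # perturb this dimension
            r = rng.integers(len(pop))         # random member (r != i not enforced here)
            new[l] = (sm[l]
                      + rng.random() * (ll[l] - sm[l])
                      + rng.uniform(-1, 1) * (pop[r][l] - sm[l]))
    return new

rng = np.random.default_rng(0)
sm = np.array([1.0, 2.0, 3.0])
ll = np.array([0.0, 0.0, 0.0])
group = [np.array([5.0, 5.0, 5.0])]
moved = local_leader_phase(sm, ll, group, pr=0.0, rng=rng)   # pr=0: always perturb
```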
3) Global leader phase
This phase begins after the LLP has been completed. Each spider monkey updates its position based on the experience of the GL and the members of its group, according to the selection probability \({p}_{i}\). The new position is updated according to Eq. (20).
$$S{M}_{newil}=\left\{\begin{array}{c}S{M}_{il}+rand\left(\mathrm{0,1}\right)*\left(G{L}_{l}-S{M}_{il}\right)+rand\left(1,-1\right)*\left(S{M}_{rl}-S{M}_{il}\right) rand\left(\mathrm{0,1}\right)\ge {p}_{i}\\ S{M}_{il}\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad otherwise\end{array}\right.$$
(20)
where \(G{L}_{l}\) is the \({l}^{th}\) dimension of the GL position, with the dimension \(l\) chosen randomly from {1, 2, …, D}. The probability \({p}_{i}\) of spider monkey \(S{M}_{i}\) is calculated from its fitness value and is given by Eq. (21).
$${p}_{i}=0.9* \frac{f\left(S{M}_{i}\right)}{\mathrm{max}\_fit}+0.1$$
(21)
where \(f\left(S{M}_{i}\right)\) is the fitness value of \(S{M}_{i}\) and \(\mathrm{max}\_fit\) is the maximum fitness value. Algorithm 2 shows the main steps of GLP.
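Eq. (21) can be checked with a few lines; note that the 0.1 offset guarantees even the worst monkey a nonzero chance of being selected for an update in the GLP:

```python
import numpy as np

def selection_probabilities(fitness):
    """Eq. (21): p_i = 0.9 * f(SM_i) / max_fit + 0.1, mapping fitness
    values into probabilities in the range [0.1, 1.0]."""
    fitness = np.asarray(fitness, dtype=float)
    return 0.9 * fitness / fitness.max() + 0.1

p = selection_probabilities([2.0, 4.0, 1.0])   # -> [0.55, 1.0, 0.325]
```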
4) Global leader learning phase
In this phase, the GL position is updated by applying a greedy selection procedure: the SM with the best fitness among all monkeys in the population becomes the new GL. If the global leader's position remains the same, the \(Globallimit\_count\) parameter is incremented by 1 [48].
5) Local leader learning phase
As in the GLL phase, this phase updates the position of the LL of each group: the best SM in the group becomes the LL of that group. If the newly updated position is the same as the old LL position, the \(Locallimit\_count\) is incremented by 1 [48].
6) Local leader decision phase
The group members change their locations based on how long the LL has gone without updating. If the local leader has not changed for \(Localleader\_Limit\) iterations, then all members of that group change their locations, either to randomly generated positions according to Eq. (18) or based on the GL and LL locations as in Eq. (22), depending on the perturbation probability \(pr\). Algorithm 3 shows the implementation steps of the LLD phase.
$${SM}_{newil}=S{M}_{il}+rand\left(\mathrm{0,1}\right)*\left(G{L}_{l}-S{M}_{il}\right)+rand\left(\mathrm{0,1}\right)*\left(S{M}_{il}-L{L}_{kl}\right)$$
(22)
7) Global leader decision phase
In this phase, the GL decides whether to divide the population into smaller subgroups based on the number of global leader position updates. If the GL does not update its position within a predetermined number of iterations, called \(Globalleader\_Limit\), it divides the population into smaller subgroups. The number of subgroups starts at two, then three, and so on, up to the maximum number of groups (\(\mathrm{max}\_GN\)). After creating the new subgroups, the LLL phase runs to determine the LL of each newly created group. If the maximum number of groups is reached and the GL position is still not updated, the GL fuses all subgroups into one group again. The implementation steps of this phase are given in Algorithm 4, while Algorithm 5 illustrates the implementation steps of the complete SMO algorithm.

4.3 Proposed SMO_ACO algorithm

In this work, a new hybrid approach combining the SMO and ACO algorithms, called SMO_ACO, is developed. In the SMO_ACO approach, the LLD phase of the basic SMO is modified to enhance the performance of the conventional algorithm. In the original LLD phase, the position of each SM is changed depending on the position of another randomly selected SM, regardless of whether the randomly chosen monkey's position is actually better. This results in a low convergence rate and further encourages the breaking and merging of groups. The SMO_ACO algorithm enhances the LLD update as in Eq. (23) by considering the distance between the GL position and the LL position of the spider group, as well as the distance between the monkey and its local leader. This update allows the \(SM\) to gain experience from the GL, as in basic SMO, as well as from the LL of its group, so it converges to a better position.
$$S{M}_{newil}=\left\{\begin{array}{c}rand\left(\mathrm{0,1}\right)*\left({\mathrm{GL}}_{l}-{\mathrm{LL}}_{\mathrm{k}l}\right)+rand\left(\mathrm{0,1}\right)*\left(S{M}_{il}-{\mathrm{LL}}_{\mathrm{k}l}\right)\quad\quad\quad\quad\quad porb\ge {p}_{i}\\ S{M}_{il}+rand\left(\mathrm{0,1}\right)*\left(G{L}_{l}-S{M}_{il}\right)+rand\left(\mathrm{0,1}\right)*\left(S{M}_{il}-{LL}_{kl}\right)\quad otherwise\end{array}\right.$$
(23)
$$porb=pr*\left({p}_{max}-{p}_{min}\right)+{p}_{min}$$
where \({p}_{max}\) and \({p}_{min}\) are the maximum and minimum probability values in the current population, respectively. Algorithm 6 shows the steps of the modified LLD phase.
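A sketch of the modified LLD update of Eq. (23). The branch condition and the two movement rules follow the equation, while the function signature and the demo values are ours:

```python
import numpy as np

def modified_lld_update(sm, gl, ll, porb, p_i, rng):
    """Eq. (23): when porb >= p_i, jump between the global and local
    leaders; otherwise pull the monkey toward the global leader and
    away from its local leader."""
    if porb >= p_i:
        return (rng.random() * (gl - ll)
                + rng.random() * (sm - ll))
    return (sm + rng.random() * (gl - sm)
            + rng.random() * (sm - ll))

rng = np.random.default_rng(7)
new_pos = modified_lld_update(np.array([1.0, 2.0]), np.array([3.0, 3.0]),
                              np.array([0.5, 0.5]), porb=0.6, p_i=0.5, rng=rng)
```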
The input to the algorithm is a list of the user's submitted tasks and a list of the available VMs. The algorithm starts by generating a random population SM of dimension A × n, where A is the population size and n is the number of submitted tasks. Each solution \({SM}_{i}\) ∈ SM represents a candidate schedule and is assessed by its final objective value according to Eq. (5). The output is the scheduling decision, represented as a 1 × n vector whose index is the task number (between 0 and n−1) and whose value is the number of the virtual machine that runs this task (between 0 and m−1).
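The vector encoding above can be illustrated with a short sketch. Computing only the schedule length from such a vector (execution time = MI/MIPS, makespan = the busiest VM's load) is a simplification of the full multi-objective fitness of Eq. (5), and all numbers are illustrative:

```python
import numpy as np

def decode_schedule(schedule, task_lengths, vm_mips):
    """A schedule is a 1 x n vector: schedule[i] = index of the VM that
    runs task i. The schedule length (makespan) is the total execution
    time of the most loaded VM, with time = MI / MIPS."""
    load = np.zeros(len(vm_mips))
    for task, vm in enumerate(schedule):
        load[vm] += task_lengths[task] / vm_mips[vm]
    return load.max()

# 4 tasks on 2 VMs: tasks 0 and 2 -> VM 0, tasks 1 and 3 -> VM 1
sl = decode_schedule([0, 1, 0, 1], [1000, 2000, 1500, 500], [500.0, 1500.0])
```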
The hybrid SMO_ACO algorithm starts by identifying the GL and the LL of each group and then calculating the probability of each \(SM\) in the population to determine \({p}_{max}\) and \({p}_{min}\). The search proceeds either with the SMO search process or with the ACO algorithm, based on the probability of the current GL (\({p}_{GL}\)). A randomly generated probability \({p}_{rand}\) in the range [\({p}_{min}, {p}_{max}\)] is compared with the GL probability: if \({p}_{GL}\) is greater than \({p}_{rand}\), the SMO search is performed; otherwise, the ACO algorithm is applied with the current GL position as an initial solution. This improves the exploitation of the SMO around the GL position, accelerates the convergence rate, and helps avoid local optima. The remaining phases of SMO_ACO are the same as in the conventional SMO, and the complete framework of the proposed SMO_ACO algorithm is given in Algorithm 7.
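The switching logic described above can be sketched as follows; `smo_search` and `aco_search` stand in for the full SMO iteration and the ACO run seeded with the GL schedule, and all names are ours:

```python
import numpy as np

def smo_aco_step(p_gl, p_min, p_max, smo_search, aco_search, rng):
    """Hybrid switch of SMO_ACO: draw p_rand uniformly in [p_min, p_max]
    and run the SMO search when the global leader's probability exceeds
    it; otherwise refine the current GL schedule with ACO."""
    p_rand = rng.uniform(p_min, p_max)
    if p_gl > p_rand:
        return smo_search()
    return aco_search()

rng = np.random.default_rng(1)
branch = smo_aco_step(1.0, 0.1, 0.9,
                      smo_search=lambda: "smo",
                      aco_search=lambda: "aco", rng=rng)
```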

5 Performance evaluation

This section presents the experimental results of scheduling different numbers of tasks on different numbers of VMs. The results of the proposed multi-objective SMO_ACO are compared with those obtained by the conventional SMO, Genetic Algorithm (GA) [50], Water Wave Optimization (WWO) [51], Harris Hawks Optimizer (HHO) [52], and Ant Colony Optimization (ACO) [53] algorithms in terms of SL, execution cost, energy consumption, balancing degree, throughput, and resource utilization.

5.1 Experimental environment and datasets

The simulation is conducted on a laptop with a Core i5-2540M CPU @ 2.60 GHz, 8 GB RAM, and a 64-bit Windows 10 operating system. The well-known CloudSim 3.0.3 toolkit is used to simulate the cloud environment [54]. The configurations of the data center and hosts are given in Table 2, the characteristics of the VMs in Table 3, and the prices of the different Azure Bs-series virtual machines in Table 4.
Table 2
Data center and host configuration

| Cloud entity | Characteristic      | Value               |
|---|---|---|
| Data center  | No. of data centers | 2                   |
|              | No. of hosts        | 2                   |
|              | No. of users        | 1                   |
| Host         | Storage             | 1 TB                |
|              | RAM                 | 2 GB                |
|              | BW                  | 10 GB               |
|              | Shared policy       | Space shared policy |
Table 3
VM characteristics

| Characteristic | Value       |
|---|---|
| No. of VMs     | 40, 80, 120 |
| MIPS           | 500 to 1500 |
| BW             | 0.5 Gb/s    |
| VMM            | Xen         |
| Size           | 100 MB      |
Table 4
Price of VM instances

| Memory (GB) | Storage (GB) | Cores | Price ($/hour) |
|---|---|---|---|
| 8  | 16  | 2  | 0.0832 |
| 16 | 32  | 4  | 0.166  |
| 32 | 64  | 8  | 0.333  |
| 48 | 96  | 12 | 0.499  |
| 64 | 128 | 16 | 0.666  |
| 80 | 160 | 20 | 0.832  |
A synthetic workload and real workload traces are used in the experiments to evaluate the efficacy of the presented SMO_ACO algorithm. A uniform distribution is used to produce the synthetic workload, yielding an equal number of small, medium, and large tasks. For measurements on real traces, the HPC2N (High-Performance Computing Center North) workload log is used; it is among the most popular and widely used benchmarks for evaluating performance in distributed systems. The synthetic workload is summarized in Table 5, and the parameters of the implemented algorithms are given in Table 6. For a fair comparison between these algorithms, each experiment is run ten times and the averages of the results are reported.
Table 5
Task properties

| Task Id | Task number | Task length (MI) | Properties  |
|---|---|---|---|
| 1 : n   | 250 : 2000  | 10,000 : 80,000  | Independent |
Table 6
Algorithm parameters

| Algorithm     | Parameter          | Value      |
|---|---|---|
| SMO & SMO_ACO | GLL                | 40         |
|               | LLL                | 50         |
|               | max_GN             | 5          |
|               | pr                 | [0.1, 0.4] |
|               | Max_iter           | 100        |
|               | Population size    | 50         |
| HHO           | Swarm size         | 50         |
|               | Max_iter           | 100        |
|               | β                  | 1.5        |
| WWO           | No. of waves       | 50         |
|               | Max_iter           | 100        |
|               | Initial wavelength | 0.5        |
|               | Max. wave height   | 6          |
| ACO           | No. of ants        | 15         |
|               | Max_iter           | 100        |
|               | Q                  | 50         |
|               | ρ                  | 0.4        |
|               | Initial pheromone  | 0.03       |
| GA            | No. of generations | 50         |
|               | Max_iter           | 100        |

5.2 Results and discussion

This section discusses the performance of the proposed SMO_ACO algorithm in a comparative study with the most recent algorithms.

5.2.1 Experimental results in terms of SL

In this study, various data sets of tasks (the HPC2N workload and the synthetic workload) are used to verify the effectiveness of the developed algorithm. Initially, the results of assigning five HPC2N workloads within the range of 250 to 2000 jobs are given in Table 7, with 40 and 80 VMs. Then the proposed algorithm is evaluated using a synthetic dataset with 120 VMs. The simulation results of the developed algorithm are compared with those of the other algorithms (ACO, WWO, HHO, and GA). The results shown in Table 7 prove that the developed SMO_ACO algorithm assigns tasks to the proper resources according to the fitness value because of its better exploration, exploitation, and time improvement.
Table 7
Mean and maximum values of schedule length (in sec) for the SMO_ACO and other algorithms

| No. of VMs | Method  | 250 tasks (mean / max) | 500 tasks (mean / max) | 1000 tasks (mean / max) | 1500 tasks (mean / max) | 2000 tasks (mean / max) |
|---|---|---|---|---|---|---|
| 40 | SMO_ACO | 367.53 / 387.46   | 713.77 / 734.6    | 859.33 / 877.17     | 997.37 / 1015.29   | 1438.46 / 1439.66   |
| 40 | SMO     | 1711.62 / 2205.98 | 3975.73 / 4617.05 | 4941.51 / 5799.8    | 5716.45 / 6444.33  | 8967.59 / 9467.53   |
| 40 | HHO     | 4144.41 / 8987.2  | 4370.72 / 9412.22 | 9777.98 / 31273.96  | 8317.65 / 20724.35 | 13164.54 / 32297.43 |
| 40 | WWO     | 2448.84 / 3867.25 | 5161.73 / 6925.92 | 6440.99 / 7647.61   | 7255.13 / 8203.59  | 10415.96 / 11272.7  |
| 40 | GA      | 2908.63 / 3705.93 | 5094.11 / 7113.50 | 6503.32 / 7033.64   | 7682.66 / 7913.76  | 10526.19 / 11386.6  |
| 40 | ACO     | 529.85 / 725.77   | 732.28 / 749.67   | 893.36 / 960.36     | 1042.03 / 1055.79  | 1446.25 / 1451.62   |
| 80 | SMO_ACO | 268.9 / 322.19    | 468.4 / 502.3     | 495.76 / 502.44     | 639.72 / 641.6     | 775.04 / 776.05     |
| 80 | SMO     | 1373.4 / 1659     | 2872.9 / 3365.45  | 3281.2 / 3641.32    | 3699.9 / 3971.4    | 5741.7 / 6245.8     |
| 80 | HHO     | 3412.2 / 11054.5  | 4688.7 / 10723.8  | 5337.5 / 7543.8     | 7142.9 / 9779.96   | 10702.9 / 15311.6   |
| 80 | WWO     | 2170.4 / 2819     | 3573.8 / 4179.36  | 4426.14 / 4968.39   | 5432.68 / 6602.5   | 6980.04 / 7656.7    |
| 80 | GA      | 2247.1 / 2681.7   | 3153.3 / 3716.7   | 3841 / 4086.9       | 4315.43 / 4328.6   | 6160.13 / 6229      |
| 80 | ACO     | 415.02 / 534.18   | 544.7 / 689.34    | 536.34 / 544.33     | 643.48 / 644       | 781.54 / 783.2      |
From Table 7 and Figs. 2 and 3, the schedule length increases with the number of tasks, as expected. SMO_ACO outperforms all other algorithms in reducing makespan for both types of workloads. The average makespan reduction achieved by SMO_ACO ranges from 1% to 30% relative to ACO and reaches up to 50% relative to SMO across the different task scenarios. The results in Table 7 illustrate that, for a fixed number of VMs, the developed SMO_ACO algorithm assigns the different tasks to the proper resources. SMO_ACO assigns tasks based on the values of the different objectives discussed in Sect. 3 and also maintains a good balance between the exploration and exploitation search phases. Therefore, SMO_ACO minimizes the SL and satisfies the first objective. It can also be seen that the difference between the maximum and average values of SL is small, which indicates that the method reaches a point close to the optimum in every run and that the deviation between results is small.

5.2.2 Experimental results in terms of EC

Figures 4 and 5 show the execution cost results for the SMO_ACO, SMO, HHO, WWO, GA, and ACO algorithms on the synthetic and HPC2N workloads. Depending on the type of virtual machine used and the time required to complete the task, the cost of executing a task may vary from one VM to another; it is calculated using Eq. (4). From Figs. 4 and 5, it can be concluded that, for the different numbers of tasks, SMO_ACO outperforms the basic SMO algorithm on both workloads. For 250 to 2000 real tasks with 80 VMs, the SMO_ACO's average execution cost is 9% to 41% less than that of the SMO. For 500 to 2000 synthetic tasks, the average reduction is 7% to 57% relative to SMO; the exception is the 250-task case, where the execution cost of SMO_ACO is 11.459 $ while that of SMO is 10.6585 $, as shown in Fig. 5. Additionally, SMO_ACO outperforms all algorithms in reducing execution cost in most experimental cases. Therefore, SMO_ACO minimizes the EC and satisfies the second objective in most cases, as shown in the mentioned figures.

5.2.3 Experimental results in terms of TE

Energy consumption values for the different experiments are shown in Figs. 6 and 7. One reason for measuring this metric is to confirm the reductions in schedule length and cost, since there is a direct relationship between schedule length, cost, and energy. As shown in Figs. 6 and 7, when the number of tasks increases, the power consumption of SMO, HHO, WWO, and GA increases rapidly, while for the proposed algorithm and ACO, energy increases very slowly, even as the number of tasks changes over time. In contrast to the other algorithms, the proposed method uses the system load to allocate resources. It can therefore be concluded that SMO_ACO is superior to the other algorithms because it assigns tasks with higher lengths and costs to higher-capacity virtual machines.
From Fig. 6(a), SMO_ACO consumes 28.89 kJ with 2000 tasks, which is the lowest among the compared algorithms. For 250 to 2000 real tasks, the SMO_ACO's average consumed energy is 74% to 79% less than that of the SMO with 80 VMs, and 70% to 75% less with 40 VMs.

5.2.4 Experimental results in terms of RU

Figures 8 and 9 show the resource utilization results on the random and HPC2N workloads for the SMO, SMO_ACO, HHO, WWO, GA, and ACO algorithms, with 40 or 80 VMs and the number of tasks varying from 250 to 2000 jobs. It can be observed from Figs. 8a, b and 9 that SMO_ACO utilizes the VMs better than SMO, HHO, WWO, GA, and ACO because SMO_ACO considers resource usage when scheduling tasks to the appropriate virtual machines. Hence, SMO_ACO improves performance in terms of maximizing resource utilization.

5.2.5 Experimental results in terms of BD

Balance Degree (BD) is the degree to which the workload is balanced over the available VMs after the scheduling process; a higher BD indicates a more efficient scheduling algorithm. The BD is calculated by Eq. (24):
$$BD=S{L}_{opt}/S{L}_{fin}$$
(24)
where \(S{L}_{fin}\) is the final SL obtained after applying the scheduling algorithm and \(S{L}_{opt}\) is the optimal schedule length, \(S{L}_{opt}=M{I}_{t}/MIP{S}_{t}\), where \(M{I}_{t}\) is the total MI of all submitted tasks and \(MIP{S}_{t}\) is the sum of all available MIPS.
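Eq. (24) is simple to compute; a short sketch with illustrative numbers (not from the paper's experiments):

```python
def balance_degree(task_mi, vm_mips, sl_final):
    """Eq. (24): BD = SL_opt / SL_fin, where SL_opt = total MI / total MIPS
    is the makespan of an ideally balanced schedule. BD = 1 means the
    obtained schedule is perfectly balanced."""
    sl_opt = sum(task_mi) / sum(vm_mips)
    return sl_opt / sl_final

# 4 tasks totalling 5000 MI on two VMs totalling 2000 MIPS, final SL = 5 s
bd = balance_degree([1000, 2000, 1500, 500], [500.0, 1500.0], sl_final=5.0)
```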
Figures 10 and 11 show the BD for scheduling different numbers of cloudlets on different numbers of virtual machines. From Figs. 10a, b and 11, the proposed SMO_ACO has the highest BD among all test cases because it has the smallest SL in every experimental case. The results show that the proposed solution assigns user tasks to the available virtual machines with a higher BD ratio, thus preventing any virtual machine from being overloaded at any time.

5.2.6 Experimental results in terms of convergence rate

The efficiency of a meta-heuristic algorithm (SMO, ACO, WWO, HHO, etc.) depends on the swarm size and the maximum number of iterations. If the number of iterations is high enough, the meta-heuristic provides better results; otherwise, it may stop at a merely near-optimal point. Ten test cases are considered, with the iteration number ranging from 100 to 1000. Table 8 shows that the SL obtained by the developed algorithm decreases as the number of iterations increases. This demonstrates that, compared to the other algorithms, the proposed SMO_ACO generates better-quality solutions and converges faster: SMO_ACO reaches an SL of 1236.22 s within 100 iterations, whereas ACO needs about 900 iterations to reach this value.
Table 8
Convergence rate with real workload (500 tasks) and 20 VMs

| Iterations | SMO_ACO   | SMO      | HHO       | WWO       | GA       | ACO      |
|---|---|---|---|---|---|---|
| 100  | 1236.2186 | 7265.17  | 10370.787 | 7873.298  | 7628.362 | 1291.22  |
| 200  | 1233.689  | 7200.133 | 9971.216  | 7776.66   | 7561.044 | 1253.319 |
| 300  | 1232.819  | 6604.728 | 9001.2    | 7604.473  | 7216.886 | 1250.92  |
| 400  | 1232.7264 | 6219.6   | 7909.74   | 7446.57   | 6412.054 | 1249.6   |
| 500  | 1231.55   | 5767.456 | 6106.836  | 7343.24   | 5363.338 | 1245.537 |
| 600  | 1230.524  | 5182.615 | 5682.996  | 7266.1323 | 4432.5   | 1244.22  |
| 700  | 1229.9917 | 4788.296 | 5129.892  | 7242.842  | 3152.81  | 1242.565 |
| 800  | 1229      | 4728.992 | 5042.24   | 7228.354  | 2999.06  | 1238.595 |
| 900  | 1227.554  | 4449.142 | 4691.32   | 7216.045  | 2976.14  | 1236.624 |
| 1000 | 1227.45   | 4295.484 | 4378.599  | 7147.1033 | 2830.66  | 1235.332 |

5.2.7 Experimental results summary

The results showed that the presented SMO_ACO effectively assigns the submitted tasks to the available VMs. The average schedule length reduction achieved by SMO_ACO ranges from 1% to 30% relative to ACO and reaches up to 50% relative to SMO across the different task scenarios. The average execution cost of SMO_ACO is up to 55% less than that of the SMO, the reduction in total consumed energy reaches 79% in some cases, and the balance degree is as high as 85% in most experiments.

5.3 Statistical validation

The proposed SMO_ACO is evaluated over ten independent runs, and the mean objective value determines the scheduler decision. Additionally, statistical tests (ANOVA, the Wilcoxon rank test, and the Friedman test) are conducted for the case of 40 VMs and 500 tasks. The schedule length results of the ANOVA test are tabulated in Table 9; the p-value obtained is 1.0075e-18, which indicates a significant difference between the column means. Table 10 gives the consumed energy results, with a p-value of 3.21e-18. Moreover, the Wilcoxon rank and Friedman tests are conducted, and their schedule length results are tabulated in Table 11. The schedule length values obtained by the presented SMO_ACO and the other approaches over a set of separate trials, ranked by the Friedman test, are given in Fig. 12. Regarding the Friedman rank test, the proposed SMO_ACO achieved the first rank with a mean of 713.7688, while the WWO came last with a mean value of 5161.7. The statistical tests prove the competence and superiority of the proposed SMO_ACO over the other considered approaches.
Table 9
ANOVA table for SL results

| Source  | SS         | df | MS         | F     | Prob > F   |
|---|---|---|---|---|---|
| Columns | 2.15594e08 | 5  | 4.31188e07 | 48.26 | 1.0075e-18 |
| Error   | 4.82435e07 | 54 | 893,397.8  |       |            |
| Total   | 2.63838e08 | 59 |            |       |            |
Table 10
ANOVA table for energy results

| Source  | SS        | df | MS      | F     | Prob > F |
|---|---|---|---|---|---|
| Columns | 30,234.56 | 5  | 6046.9  | 45.75 | 3.21e-18 |
| Error   | 7,137.325 | 54 | 132.173 |       |          |
| Total   | 37,371.89 | 59 |         |       |          |
Table 11
Wilcoxon sign-rank and Friedman tests (Wilcoxon columns compare SMO_ACO against each algorithm)

|                               | SMO_ACO  | SMO        | HHO        | WWO        | GA         | ACO      |
|---|---|---|---|---|---|---|
| R+                            | –        | 55         | 55         | 55         | 55         | 71       |
| R−                            | –        | − 3.7      | − 3.741    | − 3.742    | − 3.742    | − 2.5324 |
| p-value (Wilcoxon)            | –        | 1.8267e-04 | 1.83e-04   | 1.8267e-04 | 1.8267e-04 | 0.0113   |
| Null hypothesis indicator (H) | –        | Accept     | Accept     | Accept     | Accept     | Accept   |
| Friedman mean rank            | 713.7688 | 3.9757e03  | 4.3707e03  | 5.1617e03  | 5.0941e03  | 732.28   |
| Final order                   | 1        | 3          | 4          | 6          | 5          | 2        |

6 Conclusion

This paper presented a new multi-objective scheduling technique, called SMO_ACO. The proposed algorithm hybridizes the SMO and ACO algorithms to improve the convergence rate toward the best solution while satisfying multiple scheduling objectives. The algorithm is designed to simultaneously optimize four objectives: reducing energy consumption, makespan, and execution cost, while maximizing resource utilization. The proposed SMO_ACO strategy is assessed on various workloads in the CloudSim toolkit. Its performance is compared to popular scheduling algorithms such as ACO, GA, WWO, and HHO using different performance metrics. Experimental results prove that the SMO_ACO algorithm assigns resources efficiently, achieving more stable and acceptable schedules for user application tasks. For future work, it would be valuable to evaluate and analyze the proposed algorithm in private and hybrid clouds considering workflow data sets. Further, it is important to consider the precedence relations of the application tasks.

Declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Literatur
1.
Zurück zum Zitat Puthal, D., Sahoo, B.P.S., Mishra, S., Swain, S.: Cloud Computing Features, Issues, and Challenges: A Big Picture in 2015 International Conference on Computational Intelligence and Networks, 2015, pp. 116–123. Puthal, D., Sahoo, B.P.S., Mishra, S., Swain, S.: Cloud Computing Features, Issues, and Challenges: A Big Picture in 2015 International Conference on Computational Intelligence and Networks, 2015, pp. 116–123.
2.
Zurück zum Zitat Bardsiri, A.K., Hashemi, S.M.: QoS Metrics for Cloud Computing Services Evaluation. Int. J. Intell. Syst. Appl. 6(12), 27–33 (2014) Bardsiri, A.K., Hashemi, S.M.: QoS Metrics for Cloud Computing Services Evaluation. Int. J. Intell. Syst. Appl. 6(12), 27–33 (2014)
3.
Zurück zum Zitat Dillon, T., Wu, C., Chang, E.: Cloud computing: Issues and challenges. Proc. - Int. Conf. Adv. Inf. Netw. Appl. AINA, pp. 27–33 (2010) Dillon, T., Wu, C., Chang, E.: Cloud computing: Issues and challenges. Proc. - Int. Conf. Adv. Inf. Netw. Appl. AINA, pp. 27–33 (2010)
4.
Zurück zum Zitat Bittencourt, L.F., Goldman, A., Madeira, E.R.M., Da Fonseca, N.L.S., Sakellariou, R.: Scheduling in distributed systems: A cloud computing perspective. Comput. Sci. Rev. 30, 31–54 (2018) Bittencourt, L.F., Goldman, A., Madeira, E.R.M., Da Fonseca, N.L.S., Sakellariou, R.: Scheduling in distributed systems: A cloud computing perspective. Comput. Sci. Rev. 30, 31–54 (2018)
5.
Zurück zum Zitat Alkhanak, R.M.P., Nabiel, E., Lee, S.P., Rezaei, R.: ‘Cost optimization approaches for scientific workflow scheduling in cloud and grid computing: A review, classifications, and open issues.’ J. Syst. Softw. 113, 1–26 (2016) Alkhanak, R.M.P., Nabiel, E., Lee, S.P., Rezaei, R.: ‘Cost optimization approaches for scientific workflow scheduling in cloud and grid computing: A review, classifications, and open issues.’ J. Syst. Softw. 113, 1–26 (2016)
6.
Zurück zum Zitat Hamid, S., Madni, H., Shafie, M., Latiff, A., Abdullahi, M., Abdulhamid, M., Usman, M.J.: Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment. PLoS One 12(5), e0176321 (2017) Hamid, S., Madni, H., Shafie, M., Latiff, A., Abdullahi, M., Abdulhamid, M., Usman, M.J.: Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment. PLoS One 12(5), e0176321 (2017)
7.
Zurück zum Zitat Valli, K.L.D.S.: Multi - objective heuristics algorithm for dynamic resource scheduling in the cloud computing environment. J. Supercomput. 77, 8252–8280 (2021) Valli, K.L.D.S.: Multi - objective heuristics algorithm for dynamic resource scheduling in the cloud computing environment. J. Supercomput. 77, 8252–8280 (2021)
8.
Zurück zum Zitat Abdullahi, S.M.M., Ngadi, M.A., Dishing, S.I., Abdulhamid, B.I.A.: An efficient symbiotic organisms search algorithm with chaotic optimization strategy for multi-objective task scheduling problems in cloud computing environment. J. Netw. Comput. Appl. 133, 60–74 (2019) Abdullahi, S.M.M., Ngadi, M.A., Dishing, S.I., Abdulhamid, B.I.A.: An efficient symbiotic organisms search algorithm with chaotic optimization strategy for multi-objective task scheduling problems in cloud computing environment. J. Netw. Comput. Appl. 133, 60–74 (2019)
9.
Zurück zum Zitat Torabi, S., Safi-Esfahani, F.: A dynamic task scheduling framework based on chicken swarm and improved raven roosting optimization methods in cloud computing. J. Supercomput. 74(6), 2581–2626 (2018) Torabi, S., Safi-Esfahani, F.: A dynamic task scheduling framework based on chicken swarm and improved raven roosting optimization methods in cloud computing. J. Supercomput. 74(6), 2581–2626 (2018)
10.
Zurück zum Zitat Kalra, M., Singh, S.: A review of metaheuristic scheduling techniques in cloud computing. Egypt. Informatics J. 16(3), 275–295 (2015) Kalra, M., Singh, S.: A review of metaheuristic scheduling techniques in cloud computing. Egypt. Informatics J. 16(3), 275–295 (2015)
12.
Zurück zum Zitat Awad, A.I., El-Hefnawy, N.A., Abdel-Kader, H.M.: Enhanced particle swarm optimization for task scheduling in cloud computing environments. Procedia Comput. Sci. 65, 920–929 (2015) Awad, A.I., El-Hefnawy, N.A., Abdel-Kader, H.M.: Enhanced particle swarm optimization for task scheduling in cloud computing environments. Procedia Comput. Sci. 65, 920–929 (2015)
13.
Zurück zum Zitat Li, K., Xu, G., Zhao, G., Dong, Y., Wang D.: Cloud task scheduling based on load balancing ant colony optimization. Proc. - 2011 6th Annu. ChinaGrid Conf. ChinaGrid, pp. 3–9 (2011) Li, K., Xu, G., Zhao, G., Dong, Y., Wang D.: Cloud task scheduling based on load balancing ant colony optimization. Proc. - 2011 6th Annu. ChinaGrid Conf. ChinaGrid, pp. 3–9 (2011)
14.
Zurück zum Zitat Ranjani Rani, R., Ramyachitra, D.: Microarray cancer gene feature selection using spider monkey optimization algorithm and cancer classification using SVM. Procedia Comput. Sci. 143, 108–116 (2018) Ranjani Rani, R., Ramyachitra, D.: Microarray cancer gene feature selection using spider monkey optimization algorithm and cancer classification using SVM. Procedia Comput. Sci. 143, 108–116 (2018)
15.
Zurück zum Zitat Akhand, M.A.H., Ayon, S.I., Shahriyar, S.A., Siddique, N., Adeli, H.: discrete spider monkey optimization for travelling salesman problem. Appl. Soft Comput. J. 86, 105887 (2020) Akhand, M.A.H., Ayon, S.I., Shahriyar, S.A., Siddique, N., Adeli, H.: discrete spider monkey optimization for travelling salesman problem. Appl. Soft Comput. J. 86, 105887 (2020)
16.
Zurück zum Zitat Singh, U., Salgotra, R., Rattan, M.: A novel binary spider monkey optimization algorithm for thinning of concentric circular antenna arrays. IETE J. Res. 62(6), 736–744 (2016) Singh, U., Salgotra, R., Rattan, M.: A novel binary spider monkey optimization algorithm for thinning of concentric circular antenna arrays. IETE J. Res. 62(6), 736–744 (2016)
17.
Zurück zum Zitat Tabasi, M., Asgharian, P.: Optimal operation of energy storage units in distributed system using social spider optimization algorithm. Int. J. Electr. Eng. Informatics 11(3), 564–579 (2019) Tabasi, M., Asgharian, P.: Optimal operation of energy storage units in distributed system using social spider optimization algorithm. Int. J. Electr. Eng. Informatics 11(3), 564–579 (2019)
18.
Zurück zum Zitat Singh, P.R., Elaziz, M.A., Xiong, S.: Modified spider monkey optimization based on nelder-mead method for global optimization. Expert Syst. Appl. 110, 264–289 (2018) Singh, P.R., Elaziz, M.A., Xiong, S.: Modified spider monkey optimization based on nelder-mead method for global optimization. Expert Syst. Appl. 110, 264–289 (2018)
19.
Zurück zum Zitat Jiang, C., Duan, Y., Yao, J.: Resource-utilization-aware task scheduling in cloud platform using three-way clustering. J. Intell. Fuzzy Syst. 37(4), 5297–5305 (2019) Jiang, C., Duan, Y., Yao, J.: Resource-utilization-aware task scheduling in cloud platform using three-way clustering. J. Intell. Fuzzy Syst. 37(4), 5297–5305 (2019)
20.
Zurück zum Zitat Hemasian-Etefagh, F., Safi-Esfahani, F.: Dynamic scheduling applying new population grouping of whales meta-heuristic in cloud computing. J. Supercomput. 75(10), 6386–6450 (2019) Hemasian-Etefagh, F., Safi-Esfahani, F.: Dynamic scheduling applying new population grouping of whales meta-heuristic in cloud computing. J. Supercomput. 75(10), 6386–6450 (2019)
21.
Sharma, M., Garg, R.: HIGA: harmony-inspired genetic algorithm for rack-aware energy-efficient task scheduling in cloud data centers. Eng. Sci. Technol. Int. J. 23(1), 211–224 (2019)
22.
Mao, L., Li, Y., Peng, G., Xu, X., Lin, W.: A multi-resource task scheduling algorithm for energy-performance trade-offs in green clouds. Sustain. Comput. Inform. Syst. 19, 233–241 (2018)
24.
Khorsand, R., Ramezanpour, M.: An energy-efficient task-scheduling algorithm based on a multi-criteria decision-making method in cloud computing. Int. J. Commun. Syst. 33(9), 1–17 (2020)
25.
Alarifi, A., Dubey, K., Amoon, M., Altameem, T., El-Samie, F.E.A., Altameem, A., Sharma, S.C., Nasr, A.A.: Energy-efficient hybrid framework for green cloud computing. IEEE Access 8, 115356–115369 (2020)
26.
Srikantaiah, S., Kansal, A., Zhao, F.: Energy aware consolidation for cloud computing. In: Workshop on Power Aware Computing and Systems (HotPower 2008) (2008)
27.
Liang, B., Dong, X., Wang, Y., Zhang, X.: A low-power task scheduling algorithm for heterogeneous cloud computing. J. Supercomput. 76(9), 7290–7314 (2020)
28.
Garg, N., Goraya, M.S.: Task deadline-aware energy-efficient scheduling model for a virtualized cloud. Arab. J. Sci. Eng. 43(2), 829–841 (2018)
29.
Chaabouni, T., Khemakhem, M.: Energy management strategy in cloud computing: a perspective study. J. Supercomput. 74(12), 6569–6597 (2018)
30.
Tesfatsion, S.K., Wadbro, E., Tordsson, J.: A combined frequency scaling and application elasticity approach for energy-efficient cloud computing. Sustain. Comput. Inform. Syst. 4(4), 205–214 (2014)
31.
Hsu, C.H., Slagter, K.D., Chen, S.C., Chung, Y.C.: Optimizing energy consumption with task consolidation in clouds. Inf. Sci. 258, 452–462 (2014)
33.
Kashikolaei, S.M.G., Hosseinabadi, A.A.R., Saemi, B., Shareh, M.B., Sangaiah, A.K., Bian, G.B.: An enhancement of task scheduling in cloud computing based on imperialist competitive algorithm and firefly algorithm. J. Supercomput. 76(8), 6302–6329 (2020)
34.
Jena, R.K.: Multi-objective task scheduling in cloud environment using nested PSO framework. Procedia Comput. Sci. 57, 1219–1227 (2015)
35.
Tawfeek, M.A., El-Sisi, A., Keshk, A.E., Torkey, F.A.: Cloud task scheduling based on ant colony optimization. In: Proceedings of the 8th International Conference on Computer Engineering and Systems (ICCES 2013), pp. 64–69 (2013)
36.
Abualigah, L., Alkhrabsheh, M.: Amended hybrid multi-verse optimizer with genetic algorithm for solving task scheduling problem in cloud computing. J. Supercomput. 78(1), 740–765 (2022)
37.
Huang, X., Lin, Y., Zhang, Z., Guo, X., Su, S.: A gradient-based optimization approach for task scheduling problem in cloud computing. Cluster Comput. 25(5), 3481–3497 (2022)
38.
Mishra, S.K., Sahoo, B., Parida, P.P.: Load balancing in cloud computing: a big picture. J. King Saud Univ. Comput. Inf. Sci. 32, 149–158 (2018)
39.
Yang, Y., Zhou, Y., Sun, Z., Cruickshank, H.: Heuristic scheduling algorithms for allocation of virtualized network and computing resources. J. Softw. Eng. Appl. 6(1), 1–13 (2013)
40.
Tsai, J.T., Fang, J.C., Chou, J.H.: Optimized task scheduling and resource allocation on cloud computing environment using improved differential evolution algorithm. Comput. Oper. Res. 40(12), 3045–3055 (2013)
41.
Jansen, K., Klein, K.-M., Verschae, J.: Closing the gap for makespan scheduling via sparsification techniques. Math. Oper. Res. 45(4), 1371–1392 (2020)
43.
Sampaio, A.M., Barbosa, J.G., Prodan, R.: PIASA: a power and interference aware resource management strategy for heterogeneous workloads in cloud data centers. Simul. Model. Pract. Theory 57, 142–160 (2015)
44.
Mishra, S.K., Puthal, D., Sahoo, B., Jena, S.K., Obaidat, M.S.: An adaptive task allocation technique for green cloud computing. J. Supercomput. 74(1), 370–385 (2018)
45.
Kumar, M., Sharma, S.C.: Load balancing algorithm to minimize the makespan time in cloud environment. World J. Model. Simul. 14(4), 276–288 (2018)
46.
Dorigo, M., Blum, C.: Ant colony optimization theory: a survey. Theor. Comput. Sci. 344(2–3), 243–278 (2005)
47.
Shang, Z.H., Zhang, J.W., Wang, X.H., Li, H.J., Luo, X.: Application on the problem of the improved ant colony algorithm on cloud computing scheduling. Int. J. Grid Distrib. Comput. 11(5), 79–90 (2018)
48.
Bansal, J.C., Sharma, H., Jadon, S.S., Clerc, M.: Spider monkey optimization algorithm for numerical optimization. Memetic Comput. 6(1), 31–47 (2014)
49.
Sharma, A., Sharma, A., Panigrahi, B.K., Kiran, D., Kumar, R.: Ageist spider monkey optimization algorithm. Swarm Evol. Comput. 28, 58–77 (2016)
50.
Hamad, S.A., Omara, F.A.: Genetic-based task scheduling algorithm in cloud computing environment. Int. J. Adv. Comput. Sci. Appl. 7(4), 550–556 (2016)
51.
Wu, X.B., Liao, J., Wang, Z.C.: Water wave optimization for the traveling salesman problem. In: Lecture Notes in Computer Science, vol. 9225, pp. 137–146 (2015)
52.
Heidari, A.A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., Chen, H.: Harris hawks optimization: algorithm and applications. Future Gener. Comput. Syst. 97, 849–872 (2019)
53.
Dorigo, M., Stützle, T.: The ant colony optimization metaheuristic: algorithms, applications, and advances. In: Handbook of Metaheuristics, pp. 250–285 (2003)
54.
Calheiros, R.N., Ranjan, R., Beloglazov, A., De Rose, C.A.F., Buyya, R.: CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Softw. Pract. Exp. 41(1), 23–50 (2011)
Metadata
Title: An efficient multi-objective scheduling algorithm based on spider monkey and ant colony optimization in cloud computing
Authors: Dina A. Amer, Gamal Attiya, Ibrahim Ziedan
Publication date: 06.06.2023
Publisher: Springer US
Published in: Cluster Computing, Issue 2/2024
Print ISSN: 1386-7857
Electronic ISSN: 1573-7543
DOI: https://doi.org/10.1007/s10586-023-04018-6
