Elsevier

Computers & Electrical Engineering

Volume 58, February 2017, Pages 154-160

CLB: A novel load balancing architecture and algorithm for cloud services

https://doi.org/10.1016/j.compeleceng.2016.01.029

Abstract

Cloud services are widely used in manufacturing, logistics, digital applications, and document processing. Such services must handle tens of thousands of concurrent requests, enable servers to seamlessly provide the load balancing capacity required to respond to incoming application traffic, and allow users to obtain information quickly and accurately. Past researchers have proposed using static load balancing or server response times to evaluate load balancing capacity, the lack of which may cause servers to load unevenly. In this study, a dynamic annexed balance method is used to solve this problem. Cloud load balancing (CLB) takes both server processing power and computer loading into consideration, making it less likely that a server will be unable to handle excessive computational requirements. Finally, two algorithms within CLB are presented, with experiments demonstrating the effectiveness of the proposed approach.

Introduction

With the rapid development of the Internet, many vendors have started to provide cloud services. More and more services can be obtained in the cloud, so users no longer have to perform operations on a local computer; all operations are computed in the cloud. When a large number of users attempt to access cloud services simultaneously, the server often fails to respond. Determining a method by which to provide users with timely and accurate responses is therefore a subject worthy of further study. Several studies have evaluated and developed algorithms and load balancing methodologies for cloud-based applications. It is difficult for a single server to deal effectively with the flow of information generated by all of the various enterprises attempting to access it: excessive flow causes server overload with a subsequent loss of information. A server load balancing mechanism can disperse the transmission of information flow and data operations, reducing the probability of increased computational time and loss of information. When one server in the cloud fails, its cloud services can be transferred to another server; services are therefore non-stop. The advantages of server load balancing include:

  1. Efficiency: improves utilization of the server and of network bandwidth.

  2. Reliability and safety: improves the reliability and security of the server.

  3. Scalability: improves the scalability of services.

Currently, there are several types of load balancing techniques for cloud-based applications. Server load balancing methods are divided into two types, hardware and software:

  1. Hardware: fourth-layer (layer-4) network switches provide server redundancy and load balancing.

  2. Software: used to calculate server usage and to allocate resources.

Regardless of whether software or hardware is used, load balancing methods are further divided into two types, static and dynamic, explained as follows:

  1. Static: used to deal with processing loads that are predictable in advance.

  2. Dynamic: used to deal with unpredictable processing loads. Based on network storage virtualization, the hosts and storage devices are linked together through Fibre Channel switches, and all virtualization requests return to the network storage device. This method does not depend on the operating system.

Static load balancing uses the polling approach (also called the Round Robin approach), in which requests are assigned sequentially to each host, as shown in Fig. 1. This method is simple and uses few resources, but it is usually unable to detect the state of the attached servers, resulting in annexation or uneven distribution. Shadrach proposed an RTSLB algorithm based on a weighted metric and compared the results with three algorithms: a random load distribution algorithm, a Round Robin load distribution algorithm, and a competitive learning algorithm [1]. A comparison of existing load balancing techniques for current research hotspots was proposed by Raghava [2] and is listed in Table 1.
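As an illustrative sketch (not code from the paper), the Round Robin approach can be expressed in a few lines of Python; the host names here are hypothetical:

```python
from itertools import cycle

# Static Round Robin: requests are handed to hosts in a fixed rotation,
# with no awareness of each server's actual load -- hence the uneven
# distribution described above when server capacities differ.
servers = ["server-A", "server-B", "server-C"]  # hypothetical host pool
rotation = cycle(servers)

def assign(request_id):
    """Return the next host in the rotation for this request."""
    return next(rotation)

# Six requests are spread evenly across the three hosts in order.
assignments = [assign(i) for i in range(6)]
print(assignments)
```

Because the rotation ignores server state, a slow or overloaded host still receives its full share of requests, which is exactly the weakness the dynamic methods below address.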

Based on previous studies on this topic [13], we propose a new paradigm for load balancing architecture and a new algorithm, which can be applied to both virtual web servers and physical servers. There have been several discussions of load balancing techniques for hardware and software, including grid-based, microprocessor-based, and wireless sensor network-based approaches [14], [15], [16], [17], [18]. In addition, we discuss more current approaches to load balancing techniques for cloud-based applications, which are listed in Table 2.

The rest of the paper is organized as follows: the introduction and earlier studies on cloud load balancing are briefly summarized in Section 1. The proposed methodology is detailed in Sections 2 to 4. Experimental results and discussions are presented in Section 5. Finally, conclusions are drawn in Section 6.


Dynamic load balance for web servers

A dynamic load balancing method allocates users to different servers based on task processing times, the computational power consumption of each server, and other factors, so that users can obtain services. Because task processing times can lead to an uneven distribution of load across servers, scholars have used three different dynamic equilibrium algorithms to solve this problem.
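A minimal sketch of the general idea (not one of the three algorithms surveyed; the field names and the 0.7/0.3 weighting are assumptions): score each server by its current task processing time and power consumption, and route the next request to the lowest score.

```python
# Hypothetical dynamic dispatch: combine per-server task time and power
# consumption into a single score; the lowest score receives the request.
def score(server, w_time=0.7, w_power=0.3):
    """Weighted combination of processing time and power load (illustrative)."""
    return w_time * server["avg_task_time"] + w_power * server["power_load"]

def dispatch(servers):
    """Pick the server with the lowest combined score for the next request."""
    return min(servers, key=score)

servers = [
    {"name": "A", "avg_task_time": 0.8, "power_load": 0.5},  # score 0.71
    {"name": "B", "avg_task_time": 0.3, "power_load": 0.9},  # score 0.48
]
print(dispatch(servers)["name"])
```

Unlike the static rotation, the scores here are recomputed from live measurements on every request, which is what lets a dynamic method absorb unpredictable loads.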

Weighted scheduling

In the weighted load balancing method, server connections are weighted according to server performance or the weighted operation of service providers, and the results are expressed as load balancing priority weights. Chang [5], [6] proposed the WRR-Dynamic algorithm using the M/M/1 model [4], with the arrival rate and service rate used to calculate the weight values as shown in Formulae (1) and (2), where the symbols represent the values shown in Table 4. However, this study did not take into account each
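This excerpt does not reproduce Formulae (1) and (2), so the following is only a hedged sketch built on the standard M/M/1 utilization ρ = λ/μ; the server data and the rule of routing to the least-utilized server are illustrative assumptions, not the WRR-Dynamic algorithm itself:

```python
# Standard M/M/1 utilization: rho = lambda / mu (stable only while rho < 1).
def mm1_utilization(arrival_rate, service_rate):
    return arrival_rate / service_rate

def pick_server(servers):
    """Illustrative weighting: route the request to the least-utilized server."""
    return min(servers, key=lambda s: mm1_utilization(s["lam"], s["mu"]))

servers = [
    {"name": "S1", "lam": 8.0, "mu": 10.0},  # rho = 0.8
    {"name": "S2", "lam": 3.0, "mu": 10.0},  # rho = 0.3
]
print(pick_server(servers)["name"])
```

In a weighted scheme of this kind, the arrival rate λ and service rate μ per server yield a utilization that serves as the priority weight.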

The cloud load balance architecture

In this study, the cloud load balancing (CLB) mechanism is designed with the structure shown in Fig. 1, divided into five layers:

Level 1: User requests for the cloud service are assigned according to the PS results; candidate servers are sorted from largest to smallest PS value, based on network packet flow.

Level 2: Cloud load balance monitoring platform (CLBMP):

  (1) Detects all service loads and determines whether each service is online.

  (2) Sorts the servers by load multiplied by weight.

  (3) Stores the sorted load
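Steps (2) and (3) of the CLBMP can be sketched as follows; the field names, sort direction, and sample loads/weights are assumptions for illustration, not values from the paper:

```python
# Hypothetical CLBMP bookkeeping: multiply each server's load by its weight,
# sort the result, and store the ranking for the distribution platform.
def rank_servers(servers):
    """Sort servers by load * weight (lowest weighted load first, assumed)."""
    return sorted(servers, key=lambda s: s["load"] * s["weight"])

servers = [
    {"name": "S1", "load": 0.9, "weight": 1.0},  # weighted load 0.90
    {"name": "S2", "load": 0.5, "weight": 1.2},  # weighted load 0.60
    {"name": "S3", "load": 0.4, "weight": 2.0},  # weighted load 0.80
]
ranking = [s["name"] for s in rank_servers(servers)]
print(ranking)
```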

Experiment results and discussions

In this study, a new cloud load balancing mechanism is proposed. This method can be used to calculate server processing power and load and thereby obtain PS values. The testing server runs Windows 7; the programming language is C# (MS Visual Studio 2010) with an MS SQL Server 2005 database. The web server is Internet Information Services 2.0, with 4,096 MB of RAM and an Intel T2390 1.86 GHz CPU.

Cloud load balancing distribution platforms are based on the PS value of the user assigned to the most

Conclusions

In this study, a new cloud load balancing mechanism is proposed and compared against previous studies. The proposed CLB paradigm for load balancing architecture and its algorithm can be applied to both virtual web servers and physical servers. Experiments on CLB-enabled physical and virtual servers show that cloud server performance based on the proposed architecture can balance loading performance when many users log in at the same time.

Acknowledgments

The authors thank the Ministry of Science and Technology (grant no. MOST-103-2221-E-006-085), which sponsored this research and provided related technological support, enabling the research to proceed smoothly.

Chen Shang-Liang received the PhD degree in Mechanical Engineering from University of Liverpool. He is currently a Professor with the Institute of Manufacturing and Information Systems at National Cheng-Kung University, Taiwan. He has published more than 70 papers in international peer reviewed journals. His research interests are in the areas of information and mechatronic integration, intelligent remote Monitoring System, PC-based multi-axis controller design, and CAD/CAM.

References (20)



Yun-Yao Chen received the Ph.D. degree in Institute of Manufacturing Information and Systems, National Cheng Kung University, Taiwan. His current research interests include Computer Integrated Manufacturing, Cloud-based Applications and Internet of Things. He has published more than 60 journal & conference research papers. He is currently a computer integrated manufacturing supervisor of Taiwan Semiconductor Manufacturing Company (TSMC).

Suang-Hong Kuo received the master degree in Institute of Manufacturing Information and Systems, National Cheng Kung University, Taiwan.

Reviews processed and recommended for publication to the Editor-in-Chief by Guest Editor Dr. W-H Hsieh.
