2018 | OriginalPaper | Chapter

Scheduling in Parallel and Distributed Computing Systems

Authors: Srishti Srivastava, Ioana Banicescu

Published in: Topics in Parallel and Distributed Computing

Publisher: Springer International Publishing

Abstract

Recent advancements in computing technology have increased the complexity of computational systems and their ability to solve larger and more complex scientific problems. Scientific applications express solutions to complex scientific problems, which often are data-parallel and contain large loops. The execution of such applications in parallel and distributed computing (PDC) environments is computationally intensive and, in general, exhibits irregular behavior due to variations of algorithmic and systemic nature. A parallel and distributed system has a set of defined policies for the use of its computational resources, and the distribution of input data onto the PDC resources depends on these policies. To reduce the overall performance degradation, mapping application tasks onto PDC resources requires parallelism detection in the application, partitioning of the problem into tasks, distribution of tasks onto parallel and distributed processing resources, and scheduling of the task execution on the allocated resources. Most scheduling policies include provisions for minimizing communication among application tasks, minimizing load imbalance, and maximizing fault tolerance. Often, these techniques also minimize idle time, the overloading of resources with jobs, and control overheads.
Over the years, a number of scheduling techniques have been developed and exploited to address the challenges in parallel and distributed computing, and these scheduling algorithms have been classified based on a taxonomy that enables an understanding and comparison of the different schemes. Broadly, they fall into static and dynamic techniques. Static techniques help minimize an individual task's response time and incur no overhead for information gathering; however, they require prior knowledge of the system and cannot address unpredictable changes at runtime. Dynamic techniques, on the other hand, have been developed to address unpredictable changes and to maximize resource utilization, at the cost of information-gathering overhead. Furthermore, scheduling algorithms have also been characterized as optimal or sub-optimal, cooperative or non-cooperative, and approximate or heuristic. This chapter presents an overview of scheduling in parallel and distributed computing, along with a taxonomy of existing (early and recent) scheduling methodologies.
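The static/dynamic trade-off described above can be illustrated with a minimal sketch (not taken from the chapter): a static scheduler partitions loop iterations into fixed contiguous chunks before execution, while a dynamic (self-scheduling) scheduler lets idle workers pull chunks from a shared queue at runtime. The function names, chunk sizes, and worker counts below are illustrative assumptions, not the chapter's notation.

```python
import queue
import threading

def static_schedule(n_iters, n_workers):
    """Static (block) scheduling: iterations are split into equal
    contiguous chunks before execution. No runtime overhead, but
    load imbalance arises if iteration costs vary."""
    chunk = (n_iters + n_workers - 1) // n_workers  # ceiling division
    return [range(w * chunk, min((w + 1) * chunk, n_iters))
            for w in range(n_workers)]

def dynamic_schedule(n_iters, n_workers, work_fn, chunk_size=1):
    """Dynamic (self-)scheduling: idle workers repeatedly pull the
    next chunk from a shared queue, trading queue-access overhead
    for better load balance under irregular iteration costs."""
    q = queue.Queue()
    for start in range(0, n_iters, chunk_size):
        q.put(range(start, min(start + chunk_size, n_iters)))
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                chunk = q.get_nowait()  # grab next available chunk
            except queue.Empty:
                return                  # no work left; worker exits
            for i in chunk:
                r = work_fn(i)
                with lock:              # protect shared results dict
                    results[i] = r

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

For example, `static_schedule(10, 3)` assigns iterations 0-3, 4-7, and 8-9 to three workers up front, whereas `dynamic_schedule` defers that assignment to runtime; the latter corresponds to the information-gathering overhead (here, queue contention) that dynamic techniques pay for improved resource utilization.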


Metadata
Title
Scheduling in Parallel and Distributed Computing Systems
Authors
Srishti Srivastava
Ioana Banicescu
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-93109-8_11