
A survey of task mapping on production grids

Published: 03 July 2013

Abstract

Grids designed for computationally demanding scientific applications started experimental phases ten years ago and have been continuously delivering computing power to a wide range of applications for more than half of this time. The observation of their emergence and evolution reveals actual constraints and successful approaches to task mapping across administrative boundaries. Beyond differences in distributions, services, protocols, and standards, a common architecture is outlined. Application-agnostic infrastructures built for resource registration, identification, and access control dispatch delegation to grid sites. Efficient task mapping is managed by large, autonomous applications or collaborations that temporarily infiltrate resources for their own benefit.
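The "temporary infiltration" described here corresponds to the pilot-job, or late-binding, pattern surveyed in the article's references (DIRAC, glideinWMS, DIANE, and similar systems): an application-level master keeps its own task queue, and generic agents submitted through the grid pull tasks only once they are actually running on a worker node. Below is a minimal, hypothetical Python sketch of that pattern; the names (Master, pilot) and structure are illustrative assumptions, not code from the survey or from any particular middleware.

```python
# Toy illustration of late binding: pilots pull tasks from an
# application-held queue only after they have started running.
# Threads stand in for grid worker nodes; all names are hypothetical.
import queue
import threading
import time


class Master:
    """Application-side master holding the real workload."""

    def __init__(self, tasks):
        self.tasks = queue.Queue()
        for t in tasks:
            self.tasks.put(t)
        self.results = []
        self.lock = threading.Lock()

    def next_task(self):
        """Hand out a task only when a pilot asks for one (late binding)."""
        try:
            return self.tasks.get_nowait()
        except queue.Empty:
            return None

    def report(self, task, result):
        with self.lock:
            self.results.append((task, result))


def pilot(master, node_name):
    """Generic agent: knows nothing about the application until it
    connects back to the master and pulls work."""
    while True:
        task = master.next_task()
        if task is None:
            return  # queue drained: the pilot simply terminates
        time.sleep(0.01)  # stand-in for the actual computation
        master.report(task, f"{task} done on {node_name}")


if __name__ == "__main__":
    master = Master(tasks=[f"task-{i}" for i in range(20)])
    pilots = [threading.Thread(target=pilot, args=(master, f"node-{n}"))
              for n in range(4)]
    for p in pilots:
        p.start()
    for p in pilots:
        p.join()
    print(f"{len(master.results)} tasks completed")
```

In production systems the pilots are ordinary grid jobs submitted through the infrastructure's access-control and dispatch layers, while the task-to-resource mapping decision stays inside the application or collaboration, which is the division of labor the abstract outlines.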



• Published in

  ACM Computing Surveys, Volume 45, Issue 3 (June 2013), 575 pages
  ISSN: 0360-0300
  EISSN: 1557-7341
  DOI: 10.1145/2480741

            Copyright © 2013 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            • Published: 3 July 2013
            • Accepted: 1 April 2012
            • Revised: 1 April 2011
            • Received: 1 May 2010
Published in ACM Computing Surveys (CSUR), Volume 45, Issue 3


            Qualifiers

            • research-article
            • Research
            • Refereed
