- HOARE, C.A.R. Monitors: an operating system structuring concept. Comm. ACM, 17, p. 549 (1974).
- BRINCH-HANSEN, P. The programming language Concurrent Pascal. IEEE Trans. on Software Engineering, 1, p. 199 (1975).
- BRINCH-HANSEN, P. Concurrent Pascal machine. Technical Report, Dept. of Information Science, California Institute of Technology (1975).
- BRINCH-HANSEN, P. The Solo operating system. Software: Practice & Experience, 6, p. 139 (1976).
- LISTER, A. M., & MAYNARD, K. J. An implementation of monitors. Software: Practice & Experience, 6, p. 377 (1976).
- LISTER, A. M., & SAYER, P. J. Hierarchical monitors. Proc. 1976 International Conference on Parallel Processing, pp. 432-49 (IEEE Cat. No. 76CH1127-OC) (1976).
- HOWARD, J. H. Proving monitors. Comm. ACM, 19, p. 273 (1976).
- KAUBISCH, W. H., PERROTT, R. H., & HOARE, C.A.R. Quasi-parallel programming. Software: Practice & Experience, 6, p. 341 (1976).