Although Unified Parallel C and OpenMP have been proposed as more efficient ways to program multicore architectures, MPI remains a widely used model on shared memory machines. Traditional mainstream MPI implementations such as MPICH2 and Open MPI build each MPI task as a process, an approach with a notable disadvantage in shared memory: message passing between processes usually requires two copies.
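As a concrete illustration, the following is a minimal sketch of that two-copy path, not actual MPICH2 or Open MPI code: the sender copies its payload into a shared bounce buffer, and the receiver copies it out again into a private buffer. The BOUNCE_SIZE constant and the spin-wait ready flag are illustrative simplifications of a real shared-memory channel.

    /* Sketch of the two-copy shared-memory path between two processes.
     * Assumed simplifications: one fixed-size bounce buffer, one message,
     * a volatile flag plus compiler barriers instead of a real protocol. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define BOUNCE_SIZE 4096   /* illustrative channel buffer size */

    int main(void) {
        /* Shared bounce buffer plus a ready flag, visible to both processes. */
        char *bounce = mmap(NULL, BOUNCE_SIZE + sizeof(int),
                            PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        volatile int *ready = (volatile int *)(bounce + BOUNCE_SIZE);
        *ready = 0;

        if (fork() == 0) {                       /* sender process */
            const char msg[] = "hello";
            memcpy(bounce, msg, sizeof msg);     /* copy 1: private -> shared */
            __sync_synchronize();
            *ready = 1;
            _exit(0);
        }

        while (!*ready)                          /* receiver: wait for data */
            ;
        __sync_synchronize();
        char recv_buf[BOUNCE_SIZE];
        memcpy(recv_buf, bounce, BOUNCE_SIZE);   /* copy 2: shared -> private */
        printf("received: %s\n", recv_buf);
        wait(NULL);
        return 0;
    }

Every message crosses the bounce buffer, so the payload is touched twice; this is the overhead that kernel-assisted one-copy techniques and thread-based designs try to eliminate.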
One-copy communication can be achieved with operating system support, through kernel modules like KNEM or LiMIC, techniques like SMARTMAP, or special system calls, with disadvantages mainly in portability and limited performance improvements. It is also possible to build each MPI task as a thread. This is not a new concept: implementations such as TOMPI, TMPI, or the newer MPI Actor and MPC-MPI run each MPI node as a thread, each stressing different goals. AzequiaMPI is a thread-based yet fully conformant open-source implementation of the MPI-1.3 standard. AzequiaMPI shows that MPI performance can be significantly improved by fully exploiting a single shared address space.
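The single shared address space is what makes a one-copy transfer straightforward for a thread-based implementation. The sketch below shows the idea under pthreads; the rendezvous_t structure and its fields are hypothetical, not AzequiaMPI's actual API: the receiver posts its user buffer, and the sender writes the message directly into it, so the payload is copied exactly once.

    /* Sketch of a one-copy transfer between two MPI "ranks" that are
     * threads of one process. The rendezvous structure is illustrative. */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        void           *recv_buf;   /* posted by the receiver */
        int             done;
    } rendezvous_t;

    static rendezvous_t rv = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, NULL, 0
    };

    static void *sender(void *arg) {
        const char msg[] = "hello";
        pthread_mutex_lock(&rv.lock);
        while (rv.recv_buf == NULL)             /* wait for posted buffer */
            pthread_cond_wait(&rv.cond, &rv.lock);
        memcpy(rv.recv_buf, msg, sizeof msg);   /* the only copy: user -> user */
        rv.done = 1;
        pthread_cond_signal(&rv.cond);
        pthread_mutex_unlock(&rv.lock);
        return NULL;
    }

    int main(void) {
        char buf[64];                           /* receiver's user buffer */
        pthread_t t;
        pthread_create(&t, NULL, sender, NULL);

        pthread_mutex_lock(&rv.lock);
        rv.recv_buf = buf;                      /* receiver posts its buffer */
        pthread_cond_signal(&rv.cond);
        while (!rv.done)
            pthread_cond_wait(&rv.cond, &rv.lock);
        pthread_mutex_unlock(&rv.lock);

        pthread_join(t, NULL);
        printf("received: %s\n", buf);
        return 0;
    }

Compiled with cc -pthread, the sender's memcpy moves data straight from the sender's source into the receiver's destination buffer, which is the single-copy behavior that motivates the thread-based design.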