I noticed the following behaviour with a deadlocked MPI program, e.g. wait.c:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int taskID = -1;
    int NTasks = -1;
    int a = 11;
    int b = 22;
    MPI_Status Stat;

    /* MPI Initializations */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &taskID);
    MPI_Comm_size(MPI_COMM_WORLD, &NTasks);

    if(taskID == 0)
        MPI_Send(&a, 1, MPI_INT, 1, 66, MPI_COMM_WORLD);
    else //if(taskID == 1)
        /* only rank 0 sends, and only to rank 1, so ranks 2 and 3 block here forever */
        MPI_Recv(&b, 1, MPI_INT, 0, 66, MPI_COMM_WORLD, &Stat);

    printf("Task %i : a: %i b: %i\n", taskID, a, b);
    MPI_Finalize();
    return 0;
}
When I compile wait.c with the mvapich2-2.1 library (which itself was compiled using gcc-4.9.2) and run it (e.g. mpirun -np 4 ./a.out), I notice via top that all 4 processes are chugging along at 100% CPU.
When I compile wait.c with the openmpi-1.6 library (which itself was compiled using gcc-4.9.2) and run it (e.g. mpirun -np 4 ./a.out), I notice via top that 2 processes are chugging along at 100% and 2 are at 0%. Presumably the 2 at 0% are the ones that completed their communication.
QUESTION : Why is there a difference in CPU usage between openmpi and mvapich2? Is this the expected behavior? When the CPU usage is 100%, is that from constantly checking to see if a message is being sent?
Both implementations busy-wait in MPI_Recv() in order to minimize latency. This explains why ranks 2 and 3 are at 100% with either of the two MPI implementations: rank 0 only ever sends to rank 1, so ranks 2 and 3 wait on a message that never arrives. Ranks 0 and 1, on the other hand, clearly progress to the MPI_Finalize() call, and that is where the two implementations differ: mvapich2 busy-waits there while Open MPI does not. To answer your question: yes, the processes are at 100% while checking whether a message has been received, and that is the expected behaviour.
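If the spinning itself is a problem, one generic workaround (not specific to mvapich2 or Open MPI; just a minimal sketch) is to replace the blocking MPI_Recv with MPI_Irecv and poll the request with MPI_Test, sleeping between checks so the waiting rank gives up the CPU. The 1 ms interval below is an arbitrary choice that trades extra latency for lower CPU usage, and it does not change what MPI_Finalize does internally:

#define _POSIX_C_SOURCE 199309L   /* for nanosleep */
#include <stdio.h>
#include <time.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int taskID = -1;
    int a = 11;
    int b = 22;
    MPI_Status stat;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &taskID);

    if(taskID == 0) {
        MPI_Send(&a, 1, MPI_INT, 1, 66, MPI_COMM_WORLD);
    } else {
        int done = 0;
        struct timespec ts = { 0, 1000000 };  /* sleep 1 ms between checks */
        MPI_Irecv(&b, 1, MPI_INT, 0, 66, MPI_COMM_WORLD, &req);
        while(!done) {
            MPI_Test(&req, &done, &stat);     /* drives progress; sets done when the message arrives */
            if(!done)
                nanosleep(&ts, NULL);         /* yield the CPU instead of spinning */
        }
    }

    printf("Task %i : a: %i b: %i\n", taskID, a, b);
    MPI_Finalize();
    return 0;
}

With this version, ranks 2 and 3 still hang (nothing is ever sent to them), but they sleep between checks instead of spinning at 100%.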
If you are not on InfiniBand, you can observe this by attaching strace to one of the processes: you should see a number of poll() invocations there.
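For example (assuming a Linux machine where you are allowed to attach to the process), running strace -p <pid> against one of the spinning a.out processes, with the PID taken from top, should show the waiting rank issuing poll() calls over and over again.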