I'm looking at several examples from PETSc and petsc4py, as well as the PDF user manual of PETSc. The manual states:
For those not familiar with MPI, a communicator is a way of indicating a collection of processes that will be involved together in a calculation or communication. Communicators have the variable type MPI_Comm. In most cases users can employ the communicator PETSC_COMM_WORLD to indicate all processes in a given run and PETSC_COMM_SELF to indicate a single process.
I believe I understand that statement, but I'm unsure of what the real consequences of using these communicators are. I'm unsure of what really happens when you do TSCreate(PETSC_COMM_WORLD,...) vs TSCreate(PETSC_COMM_SELF,...), or likewise for a distributed array. If you create a DMDA with PETSC_COMM_SELF, does this mean that the DM object won't really be distributed across multiple processes? Or if you create a TS with PETSC_COMM_SELF and a DM with PETSC_COMM_WORLD, does this mean the solver can't actually access ghost nodes? Does it affect the results of DMCreateLocalVector and DMCreateGlobalVector?
The communicator passed to a solver determines which processes participate in the solver's operations. For example, a TS created with PETSC_COMM_SELF runs an independent solve on each process, whereas one created with PETSC_COMM_WORLD evolves a single system distributed across all processes. If you attach a DM to the solver, the two communicators must be congruent.
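Here is a small petsc4py sketch (my own example, not from the manual) that makes the difference visible. It assumes a 1-D DMDA with 64 grid points and a launch like mpiexec -n 4; both numbers are arbitrary choices for illustration. It compares the global/local vector sizes of a DMDA created on PETSC_COMM_WORLD vs. PETSC_COMM_SELF, and creates a TS on the same communicator as the DM it uses.

    # Compare a DMDA (and a TS attached to it) on PETSC_COMM_WORLD
    # vs. PETSC_COMM_SELF. Run with e.g. "mpiexec -n 4 python demo.py".
    import sys
    import petsc4py
    petsc4py.init(sys.argv)
    from petsc4py import PETSc

    rank = PETSc.COMM_WORLD.getRank()

    for comm, name in [(PETSc.COMM_WORLD, "PETSC_COMM_WORLD"),
                       (PETSc.COMM_SELF,  "PETSC_COMM_SELF")]:
        # 1-D distributed array with 64 grid points and stencil width 1
        da = PETSc.DMDA().create(dim=1, sizes=[64], stencil_width=1, comm=comm)

        gvec = da.createGlobalVec()   # one entry per owned grid point, no ghosts
        lvec = da.createLocalVec()    # owned points plus ghost points

        # On COMM_WORLD the 64 points are divided among the ranks, and the
        # local vector picks up ghost points at the subdomain boundaries.
        # On COMM_SELF every rank owns all 64 points and there is nothing
        # to communicate, so local and global sizes coincide.
        PETSc.Sys.syncPrint(f"[rank {rank}] {name}: "
                            f"global size {gvec.getSize()}, "
                            f"owned size {gvec.getLocalSize()}, "
                            f"owned+ghost size {lvec.getSize()}")
        PETSc.Sys.syncFlush()

        # A TS that uses this DM should live on the same communicator:
        # on COMM_WORLD it advances one distributed system, on COMM_SELF
        # each rank advances its own independent copy.
        ts = PETSc.TS().create(comm=comm)
        ts.setDM(da)
        ts.destroy()
        da.destroy()

With 4 processes, the COMM_WORLD case should report a global size of 64 with each rank owning a 16-point chunk (plus ghosts in the local vector), while the COMM_SELF case should report 64 owned points on every rank, i.e. four completely independent copies of the grid.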