- UPstream::Communicator is similar to UPstream::Request: it
wraps/unwraps MPI_Comm. Provides a 'lookup' method to transcribe
the internal OpenFOAM communicator tracking to the opaque wrapped
version.
- provide an 'openfoam_mpi.H' interfacing file, which includes
<mpi.h> as well as the casting routines.

Example (caution: ugly!)

    MPI_Comm myComm =
        PstreamUtils::Cast::to_mpi
        (
            UPstream::Communicator::lookup(UPstream::worldComm)
        );
- MPI_THREAD_MULTIPLE is usually undesirable for performance reasons,
but in some cases may be necessary if a linked library expects it.
Provide a '-mpi-threads' option to explicitly request it.
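For example (the solver name here is only illustrative):

    mpirun -np 4 simpleFoam -parallel -mpi-threads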
ENH: consolidate some looping logic within argList
- permits distinguishing communicators/groups that were
user-created (eg, with MPI_Comm_create) from those queried from MPI.
Previously this simply relied on non-null values, but that is too
fragile.
ENH: support List<Request> version of UPstream::finishedRequests
- allows algorithms to manage their own (independent) lists of
requests instead of relying on the global request handling
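A minimal sketch of the intended usage (the exact return semantics
are assumed: true once all requests in the list have completed):

    // Sketch: requests gathered per-algorithm, not via the
    // global request bookkeeping (semantics assumed)
    DynamicList<UPstream::Request> requests;

    // ... initiate non-blocking sends/recvs, appending to 'requests'

    if (UPstream::finishedRequests(requests))
    {
        // All of these requests have completed
    }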
ENH: added UPstream::probeMessage(...). Blocking or non-blocking
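A rough sketch of usage (the argument order and the (rank, size)
pair return are assumptions - check UPstream.H for the actual
signature):

    // Sketch: non-blocking probe for any pending message
    std::pair<int, int64_t> probed =
        UPstream::probeMessage
        (
            UPstream::commsTypes::nonBlocking,
            -1,                   // any source rank
            UPstream::msgType(),  // tag
            UPstream::worldComm
        );

    if (probed.first >= 0)
    {
        // Pending message: source rank (first), byte count (second)
    }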
- previously built the entire adjacency table (full communication!),
but this is only strictly needed when using 'scheduled' as the
default communication mode. For blocking/nonBlocking modes this
information is not needed at that point.

The processorTopology::New now generally creates less data at
startup: the processor->patch mapping and the patchSchedule. If the
default communication mode is 'scheduled', the behaviour is almost
identical to before.
- Use Map<label> for the processor->patch mapping for a smaller memory
footprint on large (ie, sparsely connected) cases. It also
simplifies coding and allows recovery of the list of procNeighbours
on demand.
- Set up the processor initEvaluate/evaluate states with fewer loops
over the patches.
========
BREAKING: procNeighbours() method changed definition
- this was previously the entire adjacency table, but is now only the
processor-local neighbours. Now use procAdjacency() to create or
recover the entire adjacency table.
The only known use is within Cloud<ParticleType>::move, where it
was only used to obtain processor-local information.
Old:

    const labelList& neighbourProcs =
        mesh.globalData().topology().procNeighbours()[Pstream::myProcNo()];

New:

    const labelList& neighbourProcs =
        mesh.globalData().topology().procNeighbours();

    // If needed, the old definition (with communication!)
    const labelListList& connectivity =
        mesh.globalData().topology().procAdjacency();
- relocate templating to factory method 'New'.
Adds provisions for more general re-use.
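A minimal sketch of the resulting construction (the template
parameter and argument list are assumptions):

    // Sketch: the class is non-templated; only the factory method
    // is templated on the processor patch type (signature assumed)
    const processorTopology topo =
        processorTopology::New<processorPolyPatch>
        (
            mesh.boundaryMesh(),
            UPstream::worldComm
        );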
- expose the processor topology in globalMeshData as topology()
- wrap proc->patch lookup as processorTopology::procPatchLookup method
(failsafe). May consider using Map<label> for its storage in the
future.
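For example (assuming the failsafe behaviour returns a negative
patch index when the processor is not an immediate neighbour):

    const label patchi =
        mesh.globalData().topology().procPatchLookup(otherProci);

    if (patchi >= 0)
    {
        // Connected to otherProci via the local boundary patch
    }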
- PstreamBuffers nProcs() and allProcs() methods to recover the rank
information consistent with the communicator used for construction
- allowClearRecv() methods for more control over buffer reuse
For example,

    pBufs.allowClearRecv(false);

    forAll(particles, particlei)
    {
        pBufs.clear();

        fill...

        read via IPstream(..., pBufs);
    }

This preserves the memory allocation of the receive buffers between
calls.
- finishedNeighbourSends() method as compact wrapper for
finishedSends() when the send/recv ranks are identical
(eg, neighbours)
- hasSendData()/hasRecvData() methods for PstreamBuffers.
Can be useful for some situations to skip reading entirely.
For example,

    pBufs.finishedNeighbourSends(neighProcs);

    if (!returnReduce(pBufs.hasRecvData(), orOp<bool>()))
    {
        // Nothing to do
        continue;
    }
    ...

On an individual basis:

    for (const int proci : pBufs.allProcs())
    {
        if (pBufs.hasRecvData(proci))
        {
            ...
        }
    }
Also conceivable to do the following instead (nonBlocking only):

    if (!returnReduce(pBufs.hasSendData(), orOp<bool>()))
    {
        // Nothing to do
        pBufs.clear();
        continue;
    }

    pBufs.finishedNeighbourSends(neighProcs);
    ...