- use a default-initialized boundBox instead of invertedBox
- reset() instead of assigning from invertedBox
- extend() (three-parameter version) and grow() methods
- inflate(Random) instead of extend + re-assigning
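A minimal sketch of the resulting usage (the grow() amount and the
'somePoints' field are illustrative assumptions, not taken from the sources):
```
// 'somePoints' : some pointField defined elsewhere
boundBox bb;              // default initialization: inverted (empty) box
bb.add(somePoints);       // extend to enclose the given point(s)
bb.grow(1e-6);            // grow the box by an absolute amount
bb.reset();               // back to an inverted (empty) box
```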
- bundles frequently used 'gather/scatter' patterns more consistently.
- combineAllGather -> combineGather + broadcast
- listCombineAllGather -> listCombineGather + broadcast
- mapCombineAllGather -> mapCombineGather + broadcast
- allGatherList -> gatherList + scatterList
- reduce -> gather + broadcast (ie, allreduce)
- The allGatherList currently wraps gatherList/scatterList, but may be
replaced with a different algorithm in the future.
STYLE: PstreamCombineReduceOps.H is mostly unneeded now
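For illustration, the equivalence for the first entry could be sketched as
follows (the plusEqOp combine operation and the exact argument lists are
assumptions, not taken from the sources):
```
scalar total = localContribution;   // per-processor value, computed elsewhere

// bundled form:
Pstream::combineAllGather(total, plusEqOp<scalar>());

// equivalent explicit form (gather onto the master, then broadcast):
// Pstream::combineGather(total, plusEqOp<scalar>());
// Pstream::broadcast(total);
```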
- the very old 'writer' class was fully stateless and always templated
on a particular output type.
This is now replaced with a 'coordSetWriter' with similar concepts
as previously introduced for surface writers (#1206).
- writers change from being a generic state-less set of routines to
more properly conforming to the normal notion of a writer.
- Parallel data is done *outside* of the writers, since they are used
in a wide variety of contexts and the caller is currently still in
a better position for deciding how to combine parallel data.
ENH: update sampleSets to sample on per-field basis (#2347)
- sample/write a field in a single step.
- support for 'sampleOnExecute' to obtain values at execution
intervals without writing.
- support 'sets' input as a dictionary entry (as well as a list),
which is similar to the changes for sampled-surface and permits use
of changeDictionary to modify content.
- use globalIndex for gathering, which reduces parallel communication and code
- qualify the sampleSet results (properties) with the name of the set.
The sample results were previously without a qualifier, which meant
that only the last property value was actually saved (previous ones
overwritten).
For example,
```
sample1
{
    scalar
    {
        average(line,T)       349.96521;
        min(line,T)           349.9544281;
        max(line,T)           350;
        average(cells,T)      349.9854619;
        min(cells,T)          349.6589286;
        max(cells,T)          350.4967271;
        average(line,epsilon) 0.04947733869;
        min(line,epsilon)     0.04449639927;
        max(line,epsilon)     0.06452856475;
    }
    label
    {
        size(line,T)       79;
        size(cells,T)      1720;
        size(line,epsilon) 79;
    }
}
```
ENH: update particleTracks application
- use globalIndex to manage original parcel addressing and
for gathering. Simplify code by introducing a helper class,
storing intermediate fields in hash tables instead of
separate lists.
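A hedged sketch of the gathering pattern referred to above (the variable
names are illustrative, not taken from particleTracks, and the gather()
overload may differ slightly):
```
const globalIndex globalParcels(localValues.size());

// globally unique id of local element i: globalParcels.toGlobal(i)

scalarList allValues;
globalParcels.gather(localValues, allValues);  // complete list on the master
```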
ADDITIONAL NOTES:
- the regionSizeDistribution largely retains separate writers since
the utility of placing sum/dev/count for all fields into a single file
is questionable.
- the streamline writing remains a "soft" upgrade, which means that
scalar and vector fields are still collected a priori and not
on-the-fly. This is due to how the streamline infrastructure is
currently handled (should be upgraded in the future).
- The writers have changed from being a generic state-less set of
routines to more properly conforming to the normal notion of a writer.
These changes allow us to combine output fields (eg, in a single
VTK/vtp file for each timestep).
Parallel data reduction and any associated bookkeeping is now part
of the surface writers.
This improves their re-usability and avoids unnecessary
and premature data reduction at the sampling stage.
It is now possible to have different output formats on a per-surface
basis.
- A new feature of the surface sampling is the ability to "store" the
sampled surfaces and fields onto a registry for reuse by other
function objects.
Additionally, the "store" can be triggered at the execution phase
as well
- using 'Zero' makes the intent clearer and avoids the need for additional
constructor casting. Eg,
labelList(10, Zero) vs. labelList(10, 0)
scalarField(10, Zero) vs. scalarField(10, scalar(0))
vectorField(10, Zero) vs. vectorField(10, vector::zero)
- The bitSet class replaces the old PackedBoolList class.
The redesign provides better block-wise access and reduced method
calls. This helps both in cases where the bitSet may be relatively
sparse, and in cases where advantage can be taken of contiguous
operations. It also makes it easier to work with a bitSet as a
top-level object.
In addition to the previously available count() method to determine
if a bitSet is being used, there are now simpler queries:
- all() - true if all bits in the addressable range are set.
- any() - true if any bits are set at all.
- none() - true if no bits are set.
These are faster than count() and allow early termination.
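For example (using the 'selected' bitSet from the loop examples below):
```
if (selected.none())
{
    return;   // nothing is set: skip further work without counting bits
}
```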
The new test() method tests the value of a single bit position and
returns a bool, without the ambiguity caused by the return type
(as with the get() method) or by const/non-const access (as with
operator[]). The name corresponds to what std::bitset uses.
The new find_first(), find_last(), find_next() methods provide a faster
means of searching for bits that are set.
This can be especially useful when using a bitSet to control a
conditional:
OLD (with macro):
forAll(selected, celli)
{
    if (selected[celli])
    {
        sumVol += mesh_.cellVolumes()[celli];
    }
}
NEW (with const_iterator):
for (const label celli : selected)
{
    sumVol += mesh_.cellVolumes()[celli];
}
or manually
for
(
    label celli = selected.find_first();
    celli != -1;
    celli = selected.find_next(celli)
)
{
    sumVol += mesh_.cellVolumes()[celli];
}
- When marking up contiguous parts of a bitset, an interval can be
represented more efficiently as a labelRange of start/size.
For example,
OLD:
if (isA<processorPolyPatch>(pp))
{
forAll(pp, i)
{
ignoreFaces.set(i);
}
}
NEW:
if (isA<processorPolyPatch>(pp))
{
ignoreFaces.set(pp.range());
}
Improve alignment of autoPtr behaviour with std::unique_ptr
- element_type typedef
- release() method - identical to ptr() method
- get() method to get the pointer without checking and without releasing it.
- operator*() for dereferencing
Method name changes
- renamed rawPtr() to get()
- renamed rawRef() to ref(), removed unused const version.
Removed methods/operators
- assignment from a raw pointer was deleted (was rarely used).
Can be convenient, but uncontrolled and potentially unsafe.
Do allow assignment from a literal nullptr though, since this
can never leak (and also corresponds to the unique_ptr API).
Additional methods
- clone() method: forwards to the clone() method of the underlying
data object with argument forwarding.
- reset(autoPtr&&) as an alternative to operator=(autoPtr&&)
STYLE: avoid implicit conversion from autoPtr to object type in many places
- existing implementation has the following:
operator const T&() const { return operator*(); }
which means that the following code works:
autoPtr<mapPolyMesh> map = ...;
updateMesh(*map); // OK: explicit dereferencing
updateMesh(map()); // OK: explicit dereferencing
updateMesh(map); // OK: implicit dereferencing
for clarity it may be preferable to avoid the implicit dereferencing
- prefer operator* to operator() when dereferencing a return value,
so it is clearer that a pointer is involved and not a function call.
Eg, return *meshPtr_; vs. return meshPtr_();
- Constructor for bounding box of a single point.
- add(boundBox), add(point) ...
-> Extend box to enclose the second box or point(s).
Eg,
bb.add(pt);
vs.
bb.min() = Foam::min(bb.min(), pt);
bb.max() = Foam::max(bb.max(), pt);
Also works with other bounding boxes.
Eg,
bb.add(bb2);
// OR
bb += bb2;
vs.
bb.min() = Foam::min(bb.min(), bb2.min());
bb.max() = Foam::max(bb.max(), bb2.max());
'+=' operator allows the reduction to be used in parallel
gather/scatter operations.
A global '+' operator is not currently needed.
Note: may be useful in the future to have a 'clear()' method
that resets to a zero-sized (inverted) box.
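A hedged sketch of how the '+=' operator can be used within a manual
gather/scatter reduction ('localPoints' is illustrative; the reduce()
method described below covers the common case more directly):
```
boundBox bb;
bb.add(localPoints);

List<boundBox> procBbs(Pstream::nProcs());
procBbs[Pstream::myProcNo()] = bb;
Pstream::gatherList(procBbs);
Pstream::scatterList(procBbs);

boundBox globalBb;
for (const boundBox& b : procBbs)
{
    globalBb += b;    // combine the per-processor boxes
}
```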
STYLE: make many bounding box constructors explicit
reduce()
- parallel reduction of min/max values.
Reduces coding for the callers.
Eg,
bb.reduce();
instead of the previous method:
reduce(bb.min(), minOp<point>());
reduce(bb.max(), maxOp<point>());
STYLE:
- use initializer list for creating static content
- use point::min/point::max when defining standard boxes
- the checking for point-connected multiple-regions now also writes the
conflicting points to a pointSet
- with the -writeSets option it now also reconstructs & writes pointSets.
In parallel the sets are reconstructed, e.g.
mpirun -np 6 checkMesh -parallel -allGeometry -allTopology -writeSets vtk
will create a postProcessing/ folder with the vtk files of the
(reconstructed) faceSets and cellSets.
The improved analysis of disconnected regions now also checks for point
connectivity, which is useful for detecting whether AMI regions have
duplicate points.
Patch contributed by Mattijs Janssens
- checkMesh has an option to write faceSets or (the outside of) cellSets in
a sampledSurface format. The set is automatically reconstructed on the master
and written to the postProcessing folder (like any sampledSurface). E.g.
mpirun -np 6 checkMesh -allTopology -allGeometry -writeSets vtk -parallel
- fixed the component ordering when writing symmTensor in the Ensight writers