- in most cases a parallel-consistent order is required.
Even when the order is not important, it will generally require
fewer allocations to create a UPtrList of entries instead of a
HashTable or even a wordList.
- in various situations with mesh regions it is also useful to
filter out or remove the defaultRegion name (ie, "region0").
Can now do that conveniently from the polyMesh itself or as a static
function. Simply use

    const word& regionDir = polyMesh::regionName(regionName);
    // or: mesh.regionName()

instead of

    const word& regionDir =
    (
        regionName != polyMesh::defaultRegion
      ? regionName
      : word::null
    );
Additionally, since the string '/' join operator filters out empty
strings, the following will work correctly:

    (polyMesh::regionName(regionName)/polyMesh::meshSubDir)
    (mesh.regionName()/polyMesh::meshSubDir)
- wrap command-line retrieval of fileName with an implicit validate.
Instead of this:

    fileName input(args[1]);
    fileName other(args["someopt"]);

Now use this:

    auto input = args.get<fileName>(1);
    auto other = args.get<fileName>("someopt");

which adds a fileName::validate on the inputs.
Because of how it is implemented, it will automatically also apply
to argList getOrDefault<fileName>, readIfPresent<fileName> etc.
- adjust fileName::validate and clean to handle backslash conversion.
This makes it easier to ensure that path names arising from MS-Windows
are consistently handled internally.
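For illustration (the exact signature and normalization behaviour are
assumed here, not quoted from the commit), a Windows-style path would
be handled as:

    // Hypothetical sketch: backslashes converted to '/'
    fileName cleaned = fileName::validate("C:\\Users\\me\\case");
    // cleaned == "C:/Users/me/case"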
- dictionarySearch: now check for an initial '/' directly instead of
relying on fileName::isAbsolute(), which now handles more cases
BREAKING: remove fileName::clean() const method
- relying on const/non-const to control the behaviour (in-place change
or return a copy) is too fragile, and the const version was
almost never used.
Replace:

    fileName sanitized = constPath.clean();

With:

    fileName sanitized(constPath);
    sanitized.clean();
STYLE: test empty() instead of comparing with fileName::null
- simplifies local toggling.
- centralize fileModification static variables into IOobject.
They were previously scattered between IOobject and regIOobject
- Favour use of argList methods that are more similar to dictionary
method names with the aim of reducing the cognitive load.
* Silently deprecate two-parameter get() method in favour of the
more familiar getOrDefault.
* Silently deprecate opt() method in favour of get()
These may be verbosely deprecated in future versions.
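As an illustrative sketch (reusing the "someopt" option name from the
examples above; the deprecated two-parameter signature is assumed):

    // Before (silently deprecated):
    scalar v = args.opt<scalar>("someopt");
    scalar w = args.get<scalar>("someopt", 1.0);

    // After (preferred):
    scalar v = args.get<scalar>("someopt");
    scalar w = args.getOrDefault<scalar>("someopt", 1.0);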
- this largely reverts 3f0f218d88 and 4ee65d12c4.
Consistent addressing with support for wrapped pointer types (eg,
autoPtr, std::unique_ptr) has proven to be less robust than desired.
Thus rescind HashTable iterator '->' dereferencing (from APR-2019).
- when Windows portable executable (.exe or .dll) files are loaded,
their dependent libraries are not fully loaded. For OpenFOAM this means
that the static constructors which are responsible for populating
run-time selection tables are not triggered, and most of the run-time
selectable models will simply not be available.
Possible Solution
=================
Avoid this problem by defining an additional library symbol such as
the following:

    extern "C" void libName_Load() {}

in the respective library, and tag this symbol as 'unresolved' for
the linker so that it will attempt to resolve it at run-time by
loading the known libraries until it finds it. The link line would
resemble the following:

    -L/some/path -llibName -ulibName_Load
Pros:
- Allows precise control of forced library loading
Cons:
- Moderately verbose adjustment of some source files (even with macro
wrapping for the declaration).
- Adjustment of numerous Make/options files and somewhat ad hoc
in nature.
- Requires additional care when implementing future libraries and/or
applications.
- This is the solution taken by the symscape patches (Richard Smith)
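A minimal sketch of such a hook with macro wrapping (the macro and
library names here are illustrative, not the actual patch):

    // In one source file of each library:
    #define defineLibraryLoadHook(LibName)                                 \
        extern "C" void LibName##_Load() {}

    defineLibraryLoadHook(finiteVolume)   // emits finiteVolume_Load()

    // Corresponding link line fragment:
    //     -L$(FOAM_LIBBIN) -lfiniteVolume -ufiniteVolume_Load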
Possible Solution
=================
Avoid this problem by simply force loading all linked libraries.
This is done by "scraping" the information out of the respective
Make/options file (after pre-processing) and using that to define
the library list that will be passed to Foam::dlOpen() at run-time.
Pros:
- One-time (very) minimal adjustment of the sources and wmake toolchain
- Automatically applies to future applications
Cons:
- Possibly larger memory footprint of application (since all dependent
libraries are loaded).
- Possible impact on startup time (while loading libraries)
- More sensitive to build failures. Since the options files are
read and modified based on the existence of the dependent
libraries as a preprocessing step, if the libraries are initially
unavailable on the first attempt at building the application,
the dependencies will be inaccurate for later (successful) builds.
- This is the solution taken by the bluecape patches (Bruno Santos)
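A rough sketch of the run-time side, assuming wmake has scraped the
(preprocessed) Make/options into a list of library names (the variable
and loop shown here are illustrative):

    // Generated from Make/options at build time:
    static const char* preloadLibs[] =
    {
        "libfiniteVolume.so",
        "libmeshTools.so"
    };

    // Force-load at startup so the static constructors run and
    // populate the run-time selection tables
    for (const char* lib : preloadLibs)
    {
        Foam::dlOpen(lib);
    }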
Adopted Solution
================
The approach taken by Bruno was adopted in a modified form since
this appears to be the most easily maintained.
Additional Notes
================
It is always possible to solve this problem by defining a corresponding
'libs (...)' entry in the case system/controlDict, which forces a dlOpen
of the listed libraries. This is obviously less than ideal for large-scale
changes, but can work to resolve an individual problem.
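For example, in the case system/controlDict (library name illustrative):

    libs    ("libmyExtraLibrary.so");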
The peldd utility (https://github.com/gsauthof/pe-util), which is
also packaged as part of MXE, could provide yet another alternative.
Like ldd it can be used to determine the library dependencies of
binaries or libraries. This information could be used to define an
additional load layer for Windows.
- Eg, with surface writers now in surfMesh, there are fewer libraries
depending on conversion and sampling.
COMP: regularize linkage ordering and avoid some implicit linkage (#1238)
- instead of

    dict.lookup(name) >> val;

can use

    dict.readEntry(name, val);

which checks the input token sizes.
This helps catch certain types of input errors:

    {
        key1 ;          // <- Missing value
        key2 1234       // <- Missing ';' terminator
        key3 val;
    }
STYLE: readIfPresent() instead of 'if found ...' in a few more places.
- centralizes IOobject handling and treatment of alternative locations.
If an alternative file location is specified, it will be used instead.
- provide decompositionMethod::canonicalName instead of using
"decomposeParDict" in various places.
General:
* -roots, -hostRoots, -fileHandler
Specific:
* -to <coordinateSystem> -from <coordinateSystem>
- Display -help-compat when compatibility or ignored options are available
STYLE: capitalization of options text
- simplifies usage.
Support syncPar check on names() to detect inconsistencies.
- simplify readFields, ReadFields and other routines by using these
new methods.
- use more succinct method names that more closely resemble dictionary
and HashTable method names. This improves method name consistency
between classes and also requires less typing effort:

    args.found(optName)         vs. args.optionFound(optName)
    args.readIfPresent(..)      vs. args.optionReadIfPresent(..)
    ...
    args.opt<scalar>(optName)   vs. args.optionRead<scalar>(optName)
    args.read<scalar>(index)    vs. args.argRead<scalar>(index)

- the older method names have been retained for code compatibility,
but are now deprecated
Within decomposeParDict, it is now possible to specify a different
decomposition method, method coefficients or number of subdomains
for each region individually.
The top-level numberOfSubdomains remains mandatory, since this
specifies the number of domains for the entire simulation.
The individual regions may use the same number or fewer domains.
Any optional method coefficients can be specified in a general
"coeffs" entry or a method-specific one, eg "metisCoeffs".
For multiLevel, only the method-specific "multiLevelCoeffs" dictionary
is used, and is also mandatory.
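An illustrative decomposeParDict fragment (region names, methods and
values are hypothetical):

    numberOfSubdomains  8;
    method              scotch;

    regions
    {
        heater
        {
            numberOfSubdomains  2;
            method              metis;
        }
    }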
----
ENH: shortcut specification for multiLevel.
In addition to the longer dictionary form, it is also possible to
use a shorter notation for multiLevel decomposition when the same
decomposition method applies to each level.
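A sketch of both forms, assuming the shorthand takes a single method
and a list of subdomain counts per level (values illustrative):

    // Longer dictionary form:
    multiLevelCoeffs
    {
        level0
        {
            numberOfSubdomains  4;
            method              scotch;
        }
        level1
        {
            numberOfSubdomains  2;
            method              scotch;
        }
    }

    // Shortcut form (same method at each level):
    multiLevelCoeffs
    {
        method      scotch;
        domains     (4 2);
    }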
old "positions" file form
The change to barycentric-based tracking changed the contents of the
cloud "positions" file to a new format comprising the barycentric
co-ordinates and other cell position-based info. This broke
backwards compatibility, providing no option to restart old cases
(v1706 and earlier), and caused difficulties for dependent code, e.g.
for post-processing utilities that could infer the contents only
after reading.
The barycentric position info is now written to a file called
"coordinates" with provision to restart old cases for which only the
"positions" file is available. Related utilities, e.g. for parallel
running and data conversion have been updated to be able to support both
file types.
To write the "positions" file by default, set the following option
in the InfoSwitches section of the controlDict:

    writeLagrangianPositions 1;
Tracking data classes are no longer templated on the derived cloud type.
The advantage of this is that they can now be passed to sub models. This
should allow continuous phase data to be removed from the parcel
classes. The disadvantage is that every function which once took a
templated TrackData argument now needs an additional TrackCloudType
argument in order to perform the necessary down-casting.
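Schematically, the signatures change along these lines (simplified;
the actual parameter lists differ):

    // Before: templated on opaque tracking data
    template<class TrackData>
    bool move(TrackData& td, const scalar trackTime);

    // After: an explicit cloud argument enables the down-casting
    template<class TrackCloudType>
    bool move(TrackCloudType& cloud, trackingData& td, const scalar trackTime);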
Original commit message:
------------------------
Parallel IO: New collated file format
When an OpenFOAM simulation runs in parallel, the data for decomposed fields and
mesh(es) has historically been stored in multiple files within separate
directories for each processor. Processor directories are named 'processorN',
where N is the processor number.
This commit introduces an alternative "collated" file format where the data for
each decomposed field (and mesh) is collated into a single file, which is
written and read on the master processor. The files are stored in a single
directory named 'processors'.
The new format produces significantly fewer files - one per field, instead of N
per field. For large parallel cases, this avoids the restriction on the number
of open files imposed by the operating system limits.
The file writing can be threaded, allowing the simulation to continue running
while the data is being written to file. NFS (Network File System) is not
needed when using the collated format and additionally, there is an option
to run without NFS with the original uncollated approach, known as
"masterUncollated".
The controls for the file handling are in the OptimisationSwitches of
etc/controlDict:
    OptimisationSwitches
    {
        ...

        //- Parallel IO file handler
        //  uncollated (default), collated or masterUncollated
        fileHandler uncollated;

        //- collated: thread buffer size for queued file writes.
        //  If set to 0 or not sufficient for the file size, threading is not used.
        //  Default: 2e9
        maxThreadFileBufferSize 2e9;

        //- masterUncollated: non-blocking buffer size.
        //  If the file exceeds this buffer size, scheduled transfer is used.
        //  Default: 2e9
        maxMasterFileBufferSize 2e9;
    }
When using the collated file handling, memory is allocated for the data in the
thread. maxThreadFileBufferSize sets the maximum size of memory in bytes that
is allocated. If the data exceeds this size, the write does not use threading.
When using the masterUncollated file handling, non-blocking MPI communication
requires a sufficiently large memory buffer on the master node.
maxMasterFileBufferSize sets the maximum size in bytes of the buffer. If the
data exceeds this size, the system uses scheduled communication.
The installation defaults for the fileHandler choice, maxThreadFileBufferSize
and maxMasterFileBufferSize (set in etc/controlDict) can be overridden within
the case controlDict file, like other parameters. Additionally the fileHandler
can be set by:
- the "-fileHandler" command line argument;
- a FOAM_FILEHANDLER environment variable.
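For example (solver name illustrative):

    mpirun -np 4 pisoFoam -parallel -fileHandler collated

    # or via the environment:
    export FOAM_FILEHANDLER=collated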
A foamFormatConvert utility allows users to convert files between the collated
and uncollated formats, e.g.
mpirun -np 2 foamFormatConvert -parallel -fileHandler uncollated
An example case demonstrating the file handling methods is provided in:
$FOAM_TUTORIALS/IO/fileHandling
The work was undertaken by Mattijs Janssens, in collaboration with Henry Weller.
Particle tracking is now performed in terms of the local barycentric
coordinates of the current tetrahedron, rather than the global
coordinate system.
Barycentric tracking works on any mesh, irrespective of mesh quality.
Particles do not get "lost", and tracking does not require ad-hoc
"corrections" or "rescues" to function robustly, because the calculation
of particle-face intersections is unambiguous and reproducible, even at
small angles of incidence.
Each particle position is defined by topology (i.e. the decomposed tet
cell it is in) and geometry (i.e. where it is in the cell). No search
operations are needed on restart or reconstruct, unlike when particle
positions are stored in the global coordinate system.
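Concretely (a standard barycentric identity, not quoted from the
original commit): a particle in a tet with vertices x1..x4 sits at

    x = b1*x1 + b2*x2 + b3*x3 + b4*x4,  with  b1+b2+b3+b4 = 1,  bi >= 0

so the coefficients bi, together with the tet topology, fully define
the position.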
The particle positions file now contains particles' local coordinates
and topology, rather than the global coordinates and cell. This change
to the output format is not backwards compatible. Existing cases with
Lagrangian data will not restart, but they will still run from time
zero without any modification. This change was necessary in order to
guarantee that the loaded particle is valid, and therefore
fundamentally prevent "loss" and "search-failure" type bugs (e.g.,
2517, 2442, 2286, 1836, 1461, 1341, 1097).
The tracking functions have also been converted to function in terms
of displacement, rather than end position. This helps remove floating
point error issues, particularly towards the end of a tracking step.
Wall bounded streamlines have been removed. The implementation proved
incompatible with the new tracking algorithm. ParaView has a surface
LIC plugin which provides equivalent, or better, functionality.
Additionally, bug report <https://bugs.openfoam.org/view.php?id=2517>
is resolved by this change.
- there was a slight mix of MUST_READ and MUST_READ_IF_MODIFIED,
but with no obvious code to handle run-time modified values
of the decomposition, or how this works with alternative
dictionaries.
- Cleanup/centralize handling of -decomposeParDict by relocating
common code into argList. Ensures that all processes receive
identical information about the -decomposeParDict option.
- Only use alternative decomposeParDict for simpleFoam/motorBike
tutorial so that this will be included in the test loop for snappy.
- Added Mattijs' fix for surfaceRedistributePar.
Moved file path handling to regIOobject and made it type-specific, so
now every object can have its own rules. Examples:
- faceZones are now processor local (and don't search up anymore)
- timeStampMaster is no longer hardcoded inside IOdictionary
(e.g. uniformDimensionedFields support it as well)
- the distributedTriSurfaceMesh is properly processor-local; no need
for fileModificationChecking manipulation.