- generalize some of the library extensions (.so vs .dylib).
These are now provided as wmake 'sysFunctions'.
- added note about unsupported/incomplete system support
- centralize detection of ThirdParty packages into the wmake/ subdirectory
by providing a series of scripts in the spirit of GNU autoconf.
For example,

    have_boost, have_readline, have_scotch, ...
Each of the `have_<package>` scripts will generally provide the
following types of functions:

    have_<package>        # detection
    no_<package>          # reset
    echo_<package>        # echoing

and the following types of variables:

    HAVE_<package>        # unset or 'true'
    <package>_ARCH_PATH   # root for <package>
    <package>_INC_DIR     # include directory for <package>
    <package>_LIB_DIR     # library directory for <package>
This simplifies the calling scripts:

    if have_metis
    then
        wmake metisDecomp
    fi

as well as reducing clutter in the corresponding Make/options:

    EXE_INC = \
        -I$(METIS_INC_DIR) \
        -I../decompositionMethods/lnInclude

    LIB_LIBS = \
        -L$(METIS_LIB_DIR) -lmetis
Any additional modifications (platform-specific or for an external build
system) can now be made centrally.
- the API-versioned calls (eg, tecini142, teczne142, tecpoly142, tecend142),
the limited availability of the SDK, and the lack of adequate testing make
proper maintenance very difficult.
- both autoPtr and tmp are defined with an implicit construct from
nullptr (but with explicit construct from a pointer to null).
Thus it is safe to use 'nullptr' when returning an empty autoPtr or tmp.
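For example (a minimal sketch, using a hypothetical 'Foo' type):

    autoPtr<Foo> makeFoo(bool wanted)
    {
        if (!wanted)
        {
            return nullptr;   // empty autoPtr via the implicit nullptr construct
        }
        return autoPtr<Foo>(new Foo());
    }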
- when constructing dimensioned fields that are to be zero-initialized,
it is preferable to use a form such as

    dimensionedScalar(dims, Zero)
    dimensionedVector(dims, Zero)

rather than

    dimensionedScalar("0", dims, 0)
    dimensionedVector("zero", dims, vector::zero)

This reduces clutter and also avoids any suggestion that the name of
the dimensioned quantity has any influence on the field's name.
An even shorter version is possible. Eg,

    dimensionedScalar(dims)

but this reduces the clarity of meaning.
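For example, zero-initializing a field (a minimal sketch; 'p', 'mesh' and
'runTime' are purely illustrative names):

    volScalarField p
    (
        IOobject("p", runTime.timeName(), mesh),
        mesh,
        dimensionedScalar(dimPressure, Zero)
    );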
- NB: UniformDimensionedField is an exception to these style changes,
since it does use the name of the dimensioned quantity (instead of the
regIOobject name).
This class is largely a pre-C++11 holdover. It is now possible to
simply use move construct/assignment directly.
In a few rare cases (eg, polyMesh::resetPrimitives) it has been
replaced by an autoPtr.
Improve alignment of its behaviour with std::unique_ptr
- element_type typedef
- release() method - identical to ptr() method
- get() method to get the pointer without checking and without releasing it.
- operator*() for dereferencing
Method name changes
- renamed rawPtr() to get()
- renamed rawRef() to ref(), removed unused const version.
Removed methods/operators
- assignment from a raw pointer was deleted (was rarely used).
Can be convenient, but uncontrolled and potentially unsafe.
Do allow assignment from a literal nullptr though, since this
can never leak (and also corresponds to the unique_ptr API).
Additional methods
- clone() method: forwards to the clone() method of the underlying
data object with argument forwarding.
- reset(autoPtr&&) as an alternative to operator=(autoPtr&&)
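For example, a minimal sketch combining several of these changes
(hypothetical 'Foo' type):

    autoPtr<Foo> ptr(new Foo());
    Foo* raw = ptr.get();            // pointer access, without releasing

    autoPtr<Foo> other;
    other.reset(std::move(ptr));     // same effect as operator=(autoPtr&&)
    other = nullptr;                 // allowed: assignment from a literal nullptr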
STYLE: avoid implicit conversion from autoPtr to object type in many places
- the existing implementation has the following:

    operator const T&() const { return operator*(); }

which means that the following code works:

    autoPtr<mapPolyMesh> map = ...;
    updateMesh(*map);    // OK: explicit dereferencing
    updateMesh(map());   // OK: explicit dereferencing
    updateMesh(map);     // OK: implicit dereferencing

For clarity it may be preferable to avoid the implicit dereferencing.
- prefer operator* to operator() when dereferencing a return value,
so it is clearer that a pointer is involved and not a function call.
Eg, return *meshPtr_; vs. return meshPtr_();
- relocated HashSetPlusEqOp and HashTablePlusEqOp to
HashSetOps::plusEqOp and HashTableOps::plusEqOp, respectively
- additional functions for converting between a labelHashSet
and a PackedBoolList or List<bool>:
From list selections to labelHashSet indices:

    HashSetOps::used(const PackedBoolList&);
    HashSetOps::used(const UList<bool>&);

From labelHashSet to list forms:

    PackedBoolList bitset(const labelHashSet&);
    List<bool> bools(const labelHashSet&);
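A minimal usage sketch (assuming the list conversions also live in the
HashSetOps namespace, per the relocation above):

    List<bool> flags(10, false);
    flags[2] = flags[5] = true;

    labelHashSet selected = HashSetOps::used(flags);   // -> {2, 5}
    List<bool> back = HashSetOps::bools(selected);     // back to a bool list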
- this currently just strips off the leading parent directory name.
For example, with the path "/this/path/and/subdirs/name":

    relative("/this/path")  ->  "and/subdirs/name"
    relative("/this")       ->  "path/and/subdirs/name"
- problems when the cloud was not available on all processors.
- NB: ensight measured data only allows a single cloud, but
foamToEnsight writes all clouds.
- use more succinct method names that more closely resemble dictionary
and HashTable method names. This improves method name consistency
between classes and also requires less typing:

    args.found(optName)          vs.  args.optionFound(optName)
    args.readIfPresent(..)       vs.  args.optionReadIfPresent(..)
    ...
    args.opt<scalar>(optName)    vs.  args.optionRead<scalar>(optName)
    args.read<scalar>(index)     vs.  args.argRead<scalar>(index)

- the older method name forms have been retained for code compatibility,
but are now deprecated
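For example, a minimal sketch of the newer calls (option and argument names
are purely illustrative):

    scalar relax = 0.5;
    args.readIfPresent("relax", relax);              // optional, keeps the default

    if (args.found("verbose"))
    {
        Info<< "verbose output requested" << endl;
    }

    const scalar scale = args.opt<scalar>("scale");  // mandatory option value
    const label n = args.read<label>(1);             // first positional argument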
- include amount of free system memory in profiling, which can give an
indication of when swapping is about to start
- profilingSummary utility to collect profiling from parallel
calculations. Collects profiling information from the processor
directories and summarizes the time spent and number of calls as
(max avg min) values.
- this provides a better, typesafe means of locating predefined cell
models than relying on strings. Lookup is now done directly via ptr()
or ref(). The lookup functions behave like on-demand singletons when
loading "etc/cellModels".
Functionality is now located entirely in cellModel, but a forwarding
version of cellModeller is provided for API (but not ABI) compatibility
with existing user code.
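For example, a minimal sketch of the enum-based lookup (the HEX/TET
enumeration names shown here are assumptions for illustration):

    const cellModel& hex = cellModel::ref(cellModel::HEX);     // reference lookup
    const cellModel* tetPtr = cellModel::ptr(cellModel::TET);  // pointer lookup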
STYLE: use constexpr for cellMatcher constants
old "positions" file form
The change to barycentric-based tracking changed the contents of the
cloud "positions" file to a new format comprising the barycentric
co-ordinates and other cell position-based info. This broke
backwards compatibility, providing no option to restart old cases
(v1706 and earlier), and caused difficulties for dependent code, e.g.
for post-processing utilities that could only infer the contents
after reading.
The barycentric position info is now written to a file called
"coordinates", with provision to restart old cases for which only the
"positions" file is available. Related utilities, e.g. for parallel
running and data conversion, have been updated to support both
file types.
To write the "positions" file by default, set the following option in
the InfoSwitches section of the controlDict:

    writeLagrangianPositions 1;
Original commit message:
------------------------
Parallel IO: New collated file format
When an OpenFOAM simulation runs in parallel, the data for decomposed fields and
mesh(es) has historically been stored in multiple files within separate
directories for each processor. Processor directories are named 'processorN',
where N is the processor number.
This commit introduces an alternative "collated" file format where the data for
each decomposed field (and mesh) is collated into a single file, which is
written and read on the master processor. The files are stored in a single
directory named 'processors'.
The new format produces significantly fewer files - one per field, instead of N
per field. For large parallel cases, this avoids the restriction on the number
of open files imposed by the operating system limits.
The file writing can be threaded, allowing the simulation to continue running
while the data is being written to file. NFS (Network File System) is not
needed when using the collated format and, additionally, there is an option
to run without NFS with the original uncollated approach, known as
"masterUncollated".
The controls for the file handling are in the OptimisationSwitches of
etc/controlDict:
    OptimisationSwitches
    {
        ...

        //- Parallel IO file handler
        //  uncollated (default), collated or masterUncollated
        fileHandler uncollated;

        //- collated: thread buffer size for queued file writes.
        //  If set to 0 or not sufficient for the file size, threading is not used.
        //  Default: 2e9
        maxThreadFileBufferSize 2e9;

        //- masterUncollated: non-blocking buffer size.
        //  If the file exceeds this buffer size, scheduled transfer is used.
        //  Default: 2e9
        maxMasterFileBufferSize 2e9;
    }
When using the collated file handling, memory is allocated for the data in the
thread. maxThreadFileBufferSize sets the maximum size of memory in bytes that
is allocated. If the data exceeds this size, the write does not use threading.
When using the masterUncollated file handling, non-blocking MPI communication
requires a sufficiently large memory buffer on the master node.
maxMasterFileBufferSize sets the maximum size in bytes of the buffer. If the
data exceeds this size, the system uses scheduled communication.
The installation defaults for the fileHandler choice, maxThreadFileBufferSize
and maxMasterFileBufferSize (set in etc/controlDict) can be overridden within
the case controlDict file, like other parameters. Additionally, the fileHandler
can be set by:
- the "-fileHandler" command line argument;
- a FOAM_FILEHANDLER environment variable.
A foamFormatConvert utility allows users to convert files between the
collated and uncollated formats, e.g.

    mpirun -np 2 foamFormatConvert -parallel -fileHandler uncollated

An example case demonstrating the file handling methods is provided in:

    $FOAM_TUTORIALS/IO/fileHandling
The work was undertaken by Mattijs Janssens, in collaboration with Henry Weller.
- previously only checked for clouds at the last instance and only
detected lagrangian fields from the first cloud.
Now checks for clouds at all instances and detects all of their fields
as well.
- error::throwExceptions(bool) returning the previous state makes it
easier to set and restore states.
- throwing() method to query the current handling (if required).
- the normal error::throwExceptions() and error::dontThrowExceptions()
also return the previous state, to make it easier to restore later.
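For example, a minimal sketch of the set/restore pattern, using the global
FatalError instance:

    const bool oldThrowing = FatalError.throwExceptions(true);

    try
    {
        // ... operation that may raise a FatalError ...
    }
    catch (const Foam::error& err)
    {
        Info<< "caught: " << err.message() << endl;
    }

    FatalError.throwExceptions(oldThrowing);   // restore the previous state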
- use an allocator class to wrap the stream pointers instead of passing
them into ISstream, OSstream and using a dynamic cast to delete
them. This is especially important if we will have a bidirectional
stream (can't delete twice!).
STYLE:
- file stream constructors with std::string (C++11)
- for rewind, explicit about in|out direction. This is not currently
important, but avoids surprises with any future bidirectional access.
- combined string streams in the StringStream.H header.
Similar to the <sstream> include, which has both input and output string
streams.
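For example, a minimal round-trip sketch with the combined header:

    #include "StringStream.H"

    Foam::OStringStream os;
    os << "nCells " << 1000;

    Foam::IStringStream is(os.str());
    Foam::word key;
    Foam::label n;
    is >> key >> n;    // key == "nCells", n == 1000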
- added an explicit print, but only report profiling to the log
file from the master process.
We don't wish to overwrite any profiling that was conducted during
the simulation. Besides which, we don't have a proper Time object
for handling the write nicely either.