- aids with detection of excess tokens (issue #762)
- deprecated dictionary::operator[] in favour of the lookup() method,
which offers more flexibility and clarity of purpose.
Additionally, the read<> and get<> forms should generally be
preferred anyway.
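For example (a hedged sketch; the dictionary 'dict' and the entry name
"nu" are illustrative only):

    const scalar nu = dict.get<scalar>("nu");   // typed retrieval
    // or stream the tokens directly:
    scalar nu2;
    dict.lookup("nu") >> nu2;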
- improves backward compatibility and naming consistency.
Retain setMany(iter1, iter2) to avoid ambiguity with the
PackedList::set(index, value) method.
- The iterator for a HashSet dereferences directly to its key.
- Eg,

      for (const label patchi : patchSet)
      {
          ...
      }

  vs.

      forAllConstIter(labelHashSet, patchSet, iter)
      {
          const label patchi = iter.key();
          ...
      }
- controlled by the 'printExecutionFormat' InfoSwitch in
etc/controlDict
// Style for "ExecutionTime = " output
// - 0 = seconds (with trailing 's')
// - 1 = day-hh:mm:ss
ExecutionTime = 112135.2 s ClockTime = 113017 s
ExecutionTime = 1-07:08:55.20 ClockTime = 1-07:23:37
- Callable via the new Time::printExecutionTime() method,
which also helps to reduce clutter in the applications.
Eg,

    runTime.printExecutionTime(Info);

vs.

    Info<< "ExecutionTime = " << runTime.elapsedCpuTime() << " s"
        << " ClockTime = " << runTime.elapsedClockTime() << " s"
        << nl << endl;
--
ENH: return elapsedClockTime() and clockTimeIncrement as double
- previously returned as time_t, which is less portable.
- generalize some of the library extensions (.so vs .dylib).
Provide as wmake 'sysFunctions'
- added note about unsupported/incomplete system support
- centralize detection of ThirdParty packages into wmake/ subdirectory
by providing a series of scripts in the spirit of GNU autoconfig.
For example,
have_boost, have_readline, have_scotch, ...
Each of the `have_<package>` scripts generally provides the
following types of functions:
have_<package> # detection
no_<package> # reset
echo_<package> # echoing
and the following types of variables:
HAVE_<package> # unset or 'true'
<package>_ARCH_PATH # root for <package>
<package>_INC_DIR # include directory for <package>
<package>_LIB_DIR # library directory for <package>
This simplifies the calling scripts:
    if have_metis
    then
        wmake metisDecomp
    fi
As well as reducing clutter in the corresponding Make/options:
    EXE_INC = \
        -I$(METIS_INC_DIR) \
        -I../decompositionMethods/lnInclude

    LIB_LIBS = \
        -L$(METIS_LIB_DIR) -lmetis
Any additional modifications (platform-specific or for an external build
system) can now be made centrally.
- IOstreamOption class to encapsulate format, compression, version.
The members are ordered to avoid internal padding in the structure,
which saves several bytes of memory for stream objects and other
classes using this combination of data.
Byte-sizes:
    old:  IOstream:48  PstreamBuffers:88  Time:928
    new:  IOstream:24  PstreamBuffers:72  Time:904
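As a generic illustration of the padding effect (not the actual
IOstreamOption members), placing the widest member first avoids
interior padding:

    struct Loose  { char fmt; int ver; char cmp; };  // sizeof == 12 on a typical ABI
    struct Packed { int ver; char fmt; char cmp; };  // sizeof == 8  on a typical ABI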
====
STYLE: remove support for deprecated uncompressed/compressed selectors
In older versions, the system/controlDict used these types of
specifications:
writeCompression uncompressed;
writeCompression compressed;
As of DEC-2009, these were deprecated in favour of using normal switch
names:
writeCompression true;
writeCompression false;
writeCompression on;
writeCompression off;
These deprecated names have now been removed; they are treated like
any other unknown input and trigger a warning. Eg,

    Unknown compression specifier 'compressed', assuming no compression
====
STYLE: provide Enum of stream format names (ascii, binary)
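A hedged sketch of the general pattern only (the exact declaration and
enumerator names may differ):

    const Enum<IOstream::streamFormat> formatNames
    ({
        { IOstream::ASCII,  "ascii" },
        { IOstream::BINARY, "binary" }
    });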
====
COMP: fixed incorrect IFstream construct in FIREMeshReader
- a spurious bool argument (presumably intended as 'uncompressed') was
being implicitly converted to a versionNumber. This is now caught by
making the IOstreamOption::versionNumber constructor explicit.
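A generic illustration of this class of bug (hypothetical names, not
the actual OpenFOAM signatures):

    struct versionNumber
    {
        explicit versionNumber(float v) : value_(v) {}  // 'explicit' now prevents
        float value_;                                   // accidental conversions
    };

    struct Reader
    {
        Reader(const char* name, versionNumber ver) {}
    };

    Reader r("mesh.fl", true);   // previously: bool -> float -> versionNumber(1.0)
                                 // compiled silently; now a compile-time error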
- bad version specifier in changeDictionary
- previously the expansions required a slash to follow, but
now either form is possible.
"<case>", "<case>/" both yield the same as "$FOAM_CASE" and
will not have a trailing slash in the result. The expansion of
"$FOAM_CASE/" will however have a trailing slash.
- adjust additional files using these expansions
- when constructing dimensioned fields that are to be zero-initialized,
it is preferable to use a form such as
dimensionedScalar(dims, Zero)
dimensionedVector(dims, Zero)
rather than
dimensionedScalar("0", dims, 0)
dimensionedVector("zero", dims, vector::zero)
This reduces clutter and also avoids any suggestion that the name of
the dimensioned quantity has any influence on the field's name.
An even shorter version is possible, eg

    dimensionedScalar(dims)

but this reduces the clarity of meaning.
- NB: UniformDimensionedField is an exception to these style changes
since it does use the name of the dimensioned type (instead of the
regIOobject).
- in many cases one can simply use lookupOrDefault("key", bool) instead of
lookupOrDefault<bool> or lookupOrDefault<Switch>, since reading a
bool from an Istream uses Switch(Istream&) anyway
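Eg, (the entry name "verbose" is illustrative only)

    const bool verbose = dict.lookupOrDefault("verbose", false);

vs.

    const bool verbose = dict.lookupOrDefault<Switch>("verbose", false);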
STYLE: relocated Switch string names into file-local scope
This class is largely a pre-C++11 holdover. It is now possible to
use move construction/assignment directly.
In a few rare cases (eg, polyMesh::resetPrimitives) it has been
replaced by an autoPtr.
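For example (a generic sketch of the move-semantics pattern; the list
type is illustrative only):

    labelList a(identity(10));
    labelList b(std::move(a));   // move construct: storage transferred, 'a' left empty
    a = std::move(b);            // move assignment: storage transferred back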
* For most cases, this conversion would be largely unintentional
and also less efficient. If the regex is desirable, the caller
should invoke it explicitly.
For example,
findStrings(regExp(str), listOfStrings);
Or use one of the keyType, wordRe, wordRes variants instead.
If a string is to be used as a plain (non-regex) matcher, it can be
invoked directly:
findMatchingStrings(str, listOfStrings);
or using the ListOps instead:
findIndices(listOfStrings, str);
* provide function interfaces for keyType.
- use more succinct method names that more closely resemble the
dictionary and HashTable method names. This improves naming
consistency between classes and also requires less typing:
args.found(optName) vs. args.optionFound(optName)
args.readIfPresent(..) vs. args.optionReadIfPresent(..)
...
args.opt<scalar>(optName) vs. args.optionRead<scalar>(optName)
args.read<scalar>(index) vs. args.argRead<scalar>(index)
- the older method names have been retained for code compatibility,
but are now deprecated
Within decomposeParDict, it is now possible to specify a different
decomposition method, method coefficients or number of subdomains
for each region individually.
The top-level numberOfSubdomains remains mandatory, since this
specifies the number of domains for the entire simulation.
The individual regions may use the same number or fewer domains.
Any optional method coefficients can be specified in a general
"coeffs" entry or a method-specific one, eg "metisCoeffs".
For multiLevel, only the method-specific "multiLevelCoeffs" dictionary
is used, and is also mandatory.
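For example (a hedged sketch of the per-region specification; the
region name and values are illustrative only):

    numberOfSubdomains  8;
    method              metis;

    regions
    {
        heater
        {
            numberOfSubdomains  2;
            method              scotch;
        }
    }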
----
ENH: shortcut specification for multiLevel.
In addition to the longer dictionary form, it is also possible to
use a shorter notation for multiLevel decomposition when the same
decomposition method applies to each level.
old "positions" file form
The change to barycentric-based tracking changed the contents of the
cloud "positions" file to a new format comprising the barycentric
co-ordinates and other cell position-based info. This broke
backwards compatibility, providing no option to restart old cases
(v1706 and earlier), and caused difficulties for dependent code, e.g.
post-processing utilities that could only infer the contents after
reading.
The barycentric position info is now written to a file called
"coordinates" with provision to restart old cases for which only the
"positions" file is available. Related utilities, e.g. for parallel
running and data conversion have been updated to be able to support both
file types.
To write the "positions" file by default, set the following option
in the InfoSwitches section of the controlDict:
writeLagrangianPositions 1;
Tracking data classes are no longer templated on the derived cloud type.
The advantage of this is that they can now be passed to sub models. This
should allow continuous phase data to be removed from the parcel
classes. The disadvantage is that every function which once took a
templated TrackData argument now needs an additional TrackCloudType
argument in order to perform the necessary down-casting.
Original commit message:
------------------------
Parallel IO: New collated file format
When an OpenFOAM simulation runs in parallel, the data for decomposed fields and
mesh(es) has historically been stored in multiple files within separate
directories for each processor. Processor directories are named 'processorN',
where N is the processor number.
This commit introduces an alternative "collated" file format where the data for
each decomposed field (and mesh) is collated into a single file, which is
written and read on the master processor. The files are stored in a single
directory named 'processors'.
The new format produces significantly fewer files - one per field, instead of N
per field. For large parallel cases, this avoids the restriction on the number
of open files imposed by the operating system limits.
The file writing can be threaded, allowing the simulation to continue running
while the data is being written to file. NFS (Network File System) is not
needed when using the collated format and, additionally, there is an option
to run without NFS with the original uncollated approach, known as
"masterUncollated".
The controls for the file handling are in the OptimisationSwitches of
etc/controlDict:
    OptimisationSwitches
    {
        ...

        //- Parallel IO file handler
        //  uncollated (default), collated or masterUncollated
        fileHandler uncollated;

        //- collated: thread buffer size for queued file writes.
        //  If set to 0, or insufficient for the file size, threading is not used.
        //  Default: 2e9
        maxThreadFileBufferSize 2e9;

        //- masterUncollated: non-blocking buffer size.
        //  If the file exceeds this buffer size, scheduled transfer is used.
        //  Default: 2e9
        maxMasterFileBufferSize 2e9;
    }
When using the collated file handling, memory is allocated for the data in the
thread. maxThreadFileBufferSize sets the maximum size of memory in bytes that
is allocated. If the data exceeds this size, the write does not use threading.
When using the masterUncollated file handling, non-blocking MPI communication
requires a sufficiently large memory buffer on the master node.
maxMasterFileBufferSize sets the maximum size in bytes of the buffer. If the
data exceeds this size, the system uses scheduled communication.
The installation defaults for the fileHandler choice, maxThreadFileBufferSize
and maxMasterFileBufferSize (set in etc/controlDict) can be overridden within
the case controlDict file, like other parameters. Additionally, the fileHandler
can be set by:
- the "-fileHandler" command line argument;
- a FOAM_FILEHANDLER environment variable.
A foamFormatConvert utility allows users to convert files between the collated
and uncollated formats, e.g.
mpirun -np 2 foamFormatConvert -parallel -fileHandler uncollated
An example case demonstrating the file handling methods is provided in:
$FOAM_TUTORIALS/IO/fileHandling
The work was undertaken by Mattijs Janssens, in collaboration with Henry Weller.
The particle position is now defined in terms of the local barycentric
coordinates of the current tetrahedron, rather than the global
coordinate system.
Barycentric tracking works on any mesh, irrespective of mesh quality.
Particles do not get "lost", and tracking does not require ad-hoc
"corrections" or "rescues" to function robustly, because the calculation
of particle-face intersections is unambiguous and reproducible, even at
small angles of incidence.
Each particle position is defined by topology (i.e. the decomposed tet
cell it is in) and geometry (i.e. where it is in the cell). No search
operations are needed on restart or reconstruct, unlike when particle
positions are stored in the global coordinate system.
The particle positions file now contains particles' local coordinates
and topology, rather than the global coordinates and cell. This change
to the output format is not backwards compatible. Existing cases with
Lagrangian data will not restart, but they will still run from time
zero without any modification. This change was necessary in order to
guarantee that the loaded particle is valid, and therefore
fundamentally prevent "loss" and "search-failure" type bugs (e.g.,
2517, 2442, 2286, 1836, 1461, 1341, 1097).
The tracking functions have also been converted to function in terms
of displacement, rather than end position. This helps remove floating
point error issues, particularly towards the end of a tracking step.
Wall bounded streamlines have been removed. The implementation proved
incompatible with the new tracking algorithm. ParaView has a surface
LIC plugin which provides equivalent, or better, functionality.
Additionally, bug report <https://bugs.openfoam.org/view.php?id=2517>
is resolved by this change.
- use an allocator class to wrap the stream pointers instead of passing
them into ISstream, OSstream and using a dynamic cast to delete them.
This is especially important if we ever have a bidirectional stream
(which cannot be deleted twice!).
STYLE:
- file stream constructors with std::string (C++11)
- for rewind, be explicit about the in|out direction. This is not currently
important, but avoids surprises with any future bidirectional access.
- combined string streams in StringStream.H header.
Similar to <sstream> include that has both input and output string
streams.
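Eg, (a hedged sketch of typical usage)

    OStringStream os;
    os << "x " << 10 << endl;

    IStringStream is(os.str());
    word key;
    label val;
    is >> key >> val;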
Community contribution from Johan Roenby, DHI
IsoAdvector is a geometric Volume-of-Fluid method for advection of a
sharp interface between two incompressible fluids. It works on both
structured and unstructured meshes with no requirements on cell shapes.
IsoAdvector is an alternative to the interface compression treatment
with the MULES limiter implemented in the interFoam family of
solvers.
The isoAdvector concept and code was developed at DHI and was funded
by a Sapere Aude postdoc grant to Johan Roenby from The Danish Council
for Independent Research | Technology and Production Sciences (Grant-ID:
DFF - 1337-00118B - FTP).
Co-funding is also provided by the GTS grant to DHI from the Danish
Agency for Science, Technology and Innovation.
The ideas behind the isoAdvector scheme and its performance are
documented in:
Roenby J, Bredmose H, Jasak H. 2016 A computational method for sharp
interface advection. R. Soc. open sci. 3: 160405.
[http://dx.doi.org/10.1098/rsos.160405](http://dx.doi.org/10.1098/rsos.160405)
Videos showing isoAdvector's performance on a number of standard
test cases can be found on this YouTube channel:
https://www.youtube.com/channel/UCt6Idpv4C8TTgz1iUX0prAA
Project contributors:
* Johan Roenby <jro@dhigroup.com> (Inventor and main developer)
* Hrvoje Jasak <hrvoje.jasak@fsb.hr> (Consistent treatment of
boundary faces including processor boundaries, parallelisation,
code clean-up)
* Henrik Bredmose <hbre@dtu.dk> (Assisted in the conceptual
development)
* Vuko Vukcevic <vuko.vukcevic@fsb.hr> (Code review, profiling,
porting to foam-extend, bug fixing, testing)
* Tomislav Maric <tomislav@sourceflux.de> (Source file
rearrangement)
* Andy Heather <a.heather@opencfd.co.uk> (Integration into OpenFOAM
for v1706 release)
See the integration repository below for the full set of changes
implemented for release into OpenFOAM v1706:
https://develop.openfoam.com/Community/Integration-isoAdvector