Several classes and methods are available in the MED library to ease the exchange of information in a parallel context. The DECs (detailed further down) then use those classes to enable the parallel remapping (projection) of a field. For historical reasons, all those items are in the same namespace as the non-parallel MEDCoupling functionalities, ParaMEDMEM.
The core elements of the API are:
For more advanced usage, the topology of the nodes in the computation is accessed through the following elements:
A Data Exchange Channel (DEC) allows the transfer and/or the interpolation (remapping) of field data between several processors in a parallel (MPI) context. Some DECs perform a simple renumbering and copy of the data, and some are capable of functionalities similar to the sequential remapper.
We list here the main characteristics of the DECs, the list being structured in the same way as the class hierarchy:
- DisjointDEC: the base class for the DECs that work between two disjoint groups of processors, one group holding the source field and the other the target field.
- InterpKernelDEC: a DisjointDEC whose projection methodology is based on the algorithms of INTERP_KERNEL, that is to say, it works in a similar fashion to what the sequential remapper does. The following projection methods are supported: P0->P0 (the most common case), P1->P0 and P0->P1.
- StructuredCoincidentDEC: also a DisjointDEC, but this one is not based on the INTERP_KERNEL algorithms. It does a simple data transfer between two fields having a common (coincident) structured support but different topologies (i.e. the structured domain is split differently among the processors for the two fields). Only the cell identifiers are handled, and no interpolation (in the sense of the computation of a weight matrix) is performed: it is a mere reallocation of the data from one domain partitioning to the other.

Besides, all the DECs inherit from the class DECOptions, which provides the methods needed to adjust the parameters used in the transfer/remapping.
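To make the P0->P0 projection above more concrete, here is a standalone, hypothetical sketch (plain Python, not MEDCoupling code) of the kind of weight matrix an intersection-based P0->P0 remapping computes, reduced to two 1D cell-wise meshes. The function names are invented for illustration; the real INTERP_KERNEL algorithms handle general 2D/3D meshes.

```python
# Hypothetical sketch of an intersection-based P0->P0 remap on 1D meshes.
# Not MEDCoupling code: function names are made up for this illustration.

def p0_p0_weights(src_edges, tgt_edges):
    """W[i][j] = |overlap of target cell i with source cell j| / |target cell i|.

    Cells are the intervals between consecutive edge coordinates."""
    weights = []
    for i in range(len(tgt_edges) - 1):
        t0, t1 = tgt_edges[i], tgt_edges[i + 1]
        row = []
        for j in range(len(src_edges) - 1):
            s0, s1 = src_edges[j], src_edges[j + 1]
            overlap = max(0.0, min(t1, s1) - max(t0, s0))
            row.append(overlap / (t1 - t0))
        weights.append(row)
    return weights

def remap(weights, src_values):
    """Apply the weight matrix to a cell-wise (P0) source field."""
    return [sum(w * v for w, v in zip(row, src_values)) for row in weights]

src_edges = [0.0, 1.0, 2.0]        # two source cells
tgt_edges = [0.0, 0.5, 1.5, 2.0]   # three target cells, different split
values = [10.0, 30.0]              # P0 field: one value per source cell
print(remap(p0_p0_weights(src_edges, tgt_edges), values))
```

The middle target cell straddles both source cells, so it receives the overlap-weighted average of their values; cells entirely inside one source cell simply copy its value.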
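The "mere reallocation" performed by the structured coincident transfer can likewise be sketched in a few lines. This is a hypothetical standalone illustration (not MEDCoupling code, and with the MPI exchange replaced by an in-process dictionary): values are matched to cells purely by global cell identifier, with no weight matrix involved.

```python
# Hypothetical sketch of redistributing a cell-wise field between two
# partitionings of the SAME mesh, using only global cell ids (no interpolation).
# Not MEDCoupling code; the MPI exchange is mimicked by a plain dictionary.

def redistribute(source_blocks, target_layout):
    """source_blocks: {src_rank: (global_ids, values)} -- source-side partitioning.
    target_layout: {tgt_rank: global_ids} -- how the same cells are split
    on the target side. Returns {tgt_rank: values}."""
    by_id = {}
    for ids, values in source_blocks.values():
        for cid, v in zip(ids, values):
            by_id[cid] = v          # values are copied as-is, never recomputed
    return {rank: [by_id[cid] for cid in ids]
            for rank, ids in target_layout.items()}

source = {0: ([0, 1, 2], [1.0, 2.0, 3.0]),   # cells 0-2 on source proc 0
          1: ([3, 4, 5], [4.0, 5.0, 6.0])}   # cells 3-5 on source proc 1
target = {0: [0, 3], 1: [1, 4], 2: [2, 5]}   # same six cells, split three ways
print(redistribute(source, target))
```

Because the two supports are coincident, every target cell finds exactly one source value; the only work is the renumbering and the communication pattern it implies.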
The most commonly used DEC is the InterpKernelDEC, and here is a simple example of its usage: