![image](../logo/png/logo_short_small.png)

Code basics
-----------

### Functionality overview

NIRVANA is a multi-physics, multi-species, multi-scale simulation code.
It implements the following physics systems:

- ideal hydrodynamics (HD) and magnetohydrodynamics (MHD) for non-relativistic, compressible flows
- diffusion processes of fluid viscosity, thermal conduction and Ohmic dissipation
- ambipolar diffusion
- selfgravity
- body force
- external cooling/heating function
- passive tracer advection
- multi-species advection-reaction

The physical processes and their underlying equations are described in more detail in the physics guide ([Wiki](https://gitlab.aip.de/ziegler/NIRVANA/-/wikis/A-PhysicsGuide), [PDF](https://gitlab.aip.de/ziegler/NIRVANA/doc/pdf/PhysicsGuide.pdf)).

The multi-species advection framework is complemented by the NIRVANA Chemistry&Cooling Model (NCCM) – a comprehensive network of chemical reactions and microphysical processes designed to model the interstellar medium (ISM). The NCCM is described in the chemistry guide ([Wiki](https://gitlab.aip.de/ziegler/NIRVANA/-/wikis/B-ChemistryGuide), [PDF](https://gitlab.aip.de/ziegler/NIRVANA/doc/pdf/ChemistryGuide.pdf)).

In full generality the multi-physics, multi-species system is governed by hyperbolic-parabolic-elliptic Partial Differential Equations (PDE) with source terms, supplemented by a set of Ordinary Differential Equations (ODE) representing the chemo-thermal rate equations in the NCCM. This highly complex system demands robust numerical schemes and time integrators for the different mathematical parts of the equations. The numerical methods applied in the NIRVANA code are documented in the numerics guide ([Wiki](https://gitlab.aip.de/ziegler/NIRVANA/-/wikis/C-NumericsGuide), [PDF](https://gitlab.aip.de/ziegler/NIRVANA/doc/pdf/NumericsGuide.pdf)).

#### Coordinate systems

NIRVANA works for Cartesian, cylindrical and spherical grids in two space dimensions (2D) and in three space dimensions (3D). Algorithms are formulated in a covariant fashion with the vector of metric scale factors, (*h*<sub>*x*</sub>, *h*<sub>*y*</sub>, *h*<sub>*z*</sub>), determining the coordinate system with canonical coordinates (*x*, *y*, *z*) as summarized in the following table.

| | geometry | canonical coordinates (*x*, *y*, *z*) | metric scale factors (*h*<sub>*x*</sub>, *h*<sub>*y*</sub>(*x*), *h*<sub>*z*</sub> = *h*<sub>*y*</sub>(*x*) ⋅ *h*<sub>*zy*</sub>(*y*)) |
|:--------------------------|:------------|:----------------------|:----------------------|
| ![image](../pic/CART.png) | CARTESIAN | (*x*, *y*, *z*) | (1, 1, 1) |
| ![image](../pic/CYL.png) | CYLINDRICAL | (*z*, *R*, *ϕ*) | (1, 1, *R*) |
| ![image](../pic/SPH.png) | SPHERICAL | (*r*, *θ*, *ϕ*) | (1, *r*, *r* ⋅ sin *θ*) |

The grid is assumed evenly spaced in canonical coordinates with cell sizes (or resolutions) *δ* *x*<sup>(0)</sup> = *L*<sub>*x*</sub>/*N*<sub>*x*</sub>, *δ* *y*<sup>(0)</sup> = *L*<sub>*y*</sub>/*N*<sub>*y*</sub> and *δ* *z*<sup>(0)</sup> = *L*<sub>*z*</sub>/*N*<sub>*z*</sub> in the *x*-, *y*- and *z*-direction, respectively. *L*<sub>*x*</sub>, *L*<sub>*y*</sub> and *L*<sub>*z*</sub> are the lengths of the computational domain and *N*<sub>*x*</sub>, *N*<sub>*y*</sub> and *N*<sub>*z*</sub> are the corresponding numbers of grid cells. Non-equidistant grid spacings are currently not possible.

*Note 1: In cylindrical geometry the first canonical coordinate is the coordinate z along the cylindrical axis.*

*Note 2: Generic 1D simulations are not possible with NIRVANA but can, in principle, be realized with a slab-symmetric 2D setup.*

#### Distributed memory simulations

NIRVANA allows for distributed-memory (parallel) simulations on compute clusters based on the Message Passing Interface (MPI). The underlying concepts of parallelism and load balancing are described in more detail in the section *Parallelism and load balancing* below.

#### Adaptive mesh refinement

NIRVANA features the possibility of multi-scale simulations by applying the technique of Adaptive Mesh Refinement (AMR). AMR works in all offered geometries and in combination with MPI. A description of the underlying ideas can be found in the section *Adaptive mesh refinement* below.

#### The TESTFIELDS infrastructure

NIRVANA supports a special functionality called TESTFIELDS. TESTFIELDS provides the AMR and MPI infrastructure for the additional solution of a number of induction-like equations as they appear in the so-called testfield method. For that purpose TESTFIELDS allocates arrays for the necessary testfield variables. Here, a testfield variable consists of a pair of vectors. The first vector describes the actual testfield fluctuations which obey an induction equation subject to the solenoidality condition. The components of the fluctuation vector are assumed face-centroid, analogous to the magnetic field components. The second vector is for the corresponding electromotive force (EMF) under the curl operator in the induction equation. The EMF components are assumed edge-centered, analogous to the electric field.

*Note: The TESTFIELDS infrastructure does not implement any numerical method to solve induction equations for testfield fluctuations.*

### Dynamic arrays

The NIRVANA infrastructure implements its own proprietary routines for dynamic array allocation. The available array allocation and deallocation functions are collected in the module `utilAF.c`. Dynamic arrays are essential in code usage. To demonstrate how to apply this functionality, the code fragment

    int nx,ny,nz;
    int *v;
    double **A2;
    double ***A3;

    nx=100;
    ny=100;
    nz=100;

    /* ARRAY ALLOCATION */
    v=Arrayi(nx);
    A2=Array2(nx,ny);
    A3=Array3(nx,ny,nz);

    ...

    /* ARRAY DEALLOCATION */
    free(v);
    free_Array2(A2);
    free_Array3(A3);

for instance, creates

- (i) a vector `v[ix]`, `ix`=0...`nx`, of `int` type with `nx`+1 elements using the function `Arrayi()`,
- (ii) a 2D array `A2[iy][ix]`, `ix`=0...`nx`, `iy`=0...`ny`, of `double` type with (`nx`+1)×(`ny`+1) elements using the function `Array2()`,
- (iii) a 3D array `A3[iz][iy][ix]`, `ix`=0...`nx`, `iy`=0...`ny`, `iz`=0...`nz`, of `double` type with (`nx`+1)×(`ny`+1)×(`nz`+1) elements using the function `Array3()`

and subsequently deallocates them.

*Note 1: The array index ordering is inverse to the argument list in the array allocation functions.*

*Note 2: Allocation of an array must always be followed by its deallocation when it is no longer used, in order to free memory resources.*

*Note 3: The module `utilAF.c` only implements array functions of such dimensionality and type as actually needed so far in NIRVANA.*
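For illustration only, a 2D allocator in the spirit of `Array2()` might look like the following minimal sketch. This is not the `utilAF.c` implementation; `Array2_sketch` and `free_Array2_sketch` are hypothetical names.

```c
#include <stdlib.h>

/* Hypothetical sketch, not the utilAF.c implementation: allocate a 2D
 * double array A[iy][ix] with iy=0...ny, ix=0...nx, i.e. (nx+1)x(ny+1)
 * elements, as one contiguous zero-initialized data block plus a vector
 * of row pointers. Note the index order A[iy][ix] is inverse to the
 * argument list (nx,ny), as in NIRVANA. */
double **Array2_sketch(int nx, int ny)
{
    double **A = malloc((size_t)(ny + 1) * sizeof *A);
    double *data = calloc((size_t)(ny + 1) * (size_t)(nx + 1), sizeof *data);
    for (int iy = 0; iy <= ny; iy++)
        A[iy] = data + (size_t)iy * (size_t)(nx + 1);
    return A;
}

/* deallocate: first the data block, then the row pointers */
void free_Array2_sketch(double **A)
{
    free(A[0]);
    free(A);
}
```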
### Mesh data structure

Knowledge of the data structure representing the numerical mesh is important for coding initial conditions (IC) and other user interfaces like non-standard boundary conditions (BC). In general, the mesh is a set of individual, logically rectangular grid blocks (also called superblocks in the following) of different size and resolution which are organized in a system of linked lists. The system of linked lists, representing the mesh, is led by a master mesh pointer, `gm`, which is a function argument to many user interfaces like `configUser()` (defining IC). Technically, `gm` is a double pointer of struct type `GRD`, i.e. of `*GRD` type. The `GRD` struct type is declared in the header file `nirvana.h`. Dereferencing `gm` to `gm[l]` gives the first superblock in a linked list which collects all superblocks belonging to mesh refinement level *l*. `gm[l]`, like any grid block, is of `GRD` type, i.e., a pointer to struct `GRD`. The base level *l* = 0 starting with pointer `gm[0]` is special because it spans the computational domain. The superblocks making up a refinement level *l* can be reached consecutively by going through the corresponding linked list. Starting with `gm[l]`, the next grid block in the list is obtained with the ’next grid’ operator `->next`. The linked list ends when the next grid pointer gives the NULL pointer. The following image illustrates the linked list concept.

![image](../pic/linked-list-concept.png)

Denoting the variable for a superblock pointer as `g`, looping over mesh refinement level *l* (index `il`) is as simple as

    for(g=gm[il]; (g); g=g->next)  /* LOOP OVER SUPERBLOCKS; STOPS IF g==NULL */
    {
      ...
    }

and looping over the full mesh is as simple as

    for(il=0; il<=_C.level_all; il++)  /* LOOP OVER REFINEMENT LEVELS */
    for(g=gm[il]; (g); g=g->next)      /* LOOP OVER SUPERBLOCKS; STOPS IF g==NULL */
    {
      ...
    }

where `_C.level_all` is a global parameter storing the highest refinement level at a time.
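The traversal pattern can also be exercised with a self-contained toy version of the linked-list concept. The struct `Blk` and the functions below are illustrative stand-ins, not the `GRD` declaration from `nirvana.h`.

```c
#include <stddef.h>

/* Toy stand-in for the GRD linked-list concept (not the real nirvana.h
 * declaration): each grid block carries a pointer to the next block of
 * its refinement level; the list ends at the NULL pointer. */
typedef struct Blk { int lev; struct Blk *next; } Blk;

/* count the superblocks of one refinement level by list traversal */
int count_level(const Blk *gm_l)
{
    int n = 0;
    for (const Blk *g = gm_l; g; g = g->next) n++;  /* stops if g==NULL */
    return n;
}

/* build a static three-block list and traverse it */
int demo_count(void)
{
    static Blk g2 = {0, NULL};
    static Blk g1 = {0, &g2};
    static Blk g0 = {0, &g1};
    return count_level(&g0);
}
```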

*Note 1: In uniform grid simulations there is just a base level represented by the linked list `gm[0]`.*

*Note 2: In distributed-memory simulations each MPI thread manages its own master mesh pointer `gm` representing the mesh portion of the partition. In particular, this means that a thread’s `gm[0]` only covers a subset of the domain-spanning base level.*

*Note 3: In addition to `gm`, NIRVANA implements another grid hierarchy imaging the numerical mesh. This dual hierarchy is represented by the global master mesh pointer `_G0` (of `*GRD` type). `_G0` is of minor (or no) importance for the user. It is important, however, for inherent code tasks, mainly in the mesh refinement, mesh repartitioning and mesh synchronization algorithms. In contrast to superblocks with variable size in `gm`, grid blocks in `_G0` are generic: they have a fixed size of 4 grid cells per space dimension and do not provide extra ghost cells for adding boundary values like superblocks do. The `_G0` generic grid block hierarchy resembles an (incomplete) oct-tree structure. Both mesh descriptions, `gm` and `_G0`, make use of the same struct type `GRD`.*

*Note 4: The `gm`-mesh is created before integration out of the `_G0`-mesh by refinement-level-wise clustering of generic grid blocks into superblocks. Superblocks automatically get ghost cells to include boundary values. After integration the solution is copied onto the `_G0`-mesh and the `gm`-mesh is destroyed.*

The struct type `GRD` subsumes all information defining a grid block (superblock or generic): the grid block size and cell spacings, metric measures, various cell coordinates, array pointers for primary variables, derived variables and others, as well as several pointers for linkage to other grid blocks or objects. An element `V` in `GRD` for a grid block instance `g` (of type `GRD`) is accessible through dereferencing `g->V`, e.g.,

| | |
|------------------------------------:|:--------------------------------------------------------------|
| `g->next` | is the next grid block relative to `g` in the grid block list |
| `g->rho[iz][iy][ix]` | is the mass density at `g`-index (`ix`,`iy`,`iz`) |
| `g->x[ix]`,`g->y[iy]`,`g->z[iz]` | are cell-nodal coordinates at `g`-index (`ix`,`iy`,`iz`) |
| `g->xc[ix]`,`g->yc[iy]`,`g->zc[iz]` | are cell-centroid coordinates at `g`-index (`ix`,`iy`,`iz`) |

The following is a complete listing of the elements `V` of struct `GRD`, grouped into the classes grid connectivity, physical variables (primary and derived), auxiliary arrays, grid metrics and grid attributes. Vector and array elements in `GRD` are dynamic structures which are allocated when a grid block is created. The array dimensionality is seen from the number of `[]`-brackets. The indicated vector and array indices refer to:

| index | meaning |
|---------------------:|:-----------------------------------------------------------------------|
| `ix`,`iy`,`iz` | grid index in *x*, *y*, *z*-direction |
| `ibx`,`iby`,`ibz` | generic grid block index in *x*, *y*, *z*-direction on superblock |
| `ibcx`,`ibcy`,`ibcz` | child grid block index in *x*, *y*, *z*-direction on parent grid block |
| `iu` | physical variable counter |
| `is` | species counter *s* = 0, ..., *N*<sub>*s*</sub> − 1 |
| `ic` | tracer counter *c* = 0, ..., *N*<sub>*c*</sub> − 1 |
| `it` | testfield fluctuation variable counter *t* = 0, ..., *N*<sub>*t*</sub> − 1 |

The most important elements for the user are the primary physical variables and the grid coordinates, which are most relevant for implementing user interfaces like `configUser.c`.

Elements `V` of struct `GRD`:

| V (grid connectivity) | description |
|:------------------------------------|:---------------------------------------------------------|
| `next` | next grid block pointer in the linked list |
| `gc[ibcz][ibcy][ibcx]` | child grid block pointers of parent grid block |
| `gb[ibz][iby][ibx]` | superblock-associated generic grid block pointers |
| `gp` | pointer to the parent generic grid block |
| `gs` | pointer to a generic grid block’s associated superblock |
| `gxl`,`gxu`,`gyl`,`gyu`,`gzl`,`gzu` | pointers to adjacent generic grid blocks of a grid block |
| `ol` | pointer to a grid block-connected object list |

| V (primary physical variables) | description |
|:---------------------------------------------------|:------------------------------------------------------------------------------------|
| `rho[iz][iy][ix]` | mass density 𝜚 |
| `mx[iz][iy][ix]` | *x*-momentum density *m*<sub>*x*</sub> |
| `my[iz][iy][ix]` | *y*-momentum density *m*<sub>*y*</sub> |
| `mz[iz][iy][ix]` | (canonical) *z*-momentum density *m*<sub>*z*</sub> |
| `e[iz][iy][ix]` | total energy density *e* |
| `bx[iz][iy][ix]`,`by[iz][iy][ix]`,`bz[iz][iy][ix]` | magnetic field components (*B*<sub>*x*</sub>, *B*<sub>*y*</sub>, *B*<sub>*z*</sub>) |
| `phi[iz][iy][ix]` | gravitational potential *Φ* |
| `nX[is][iz][iy][ix]` | species number densities *n*<sub>*s*</sub> |
| `C[ic][iz][iy][ix]` | tracer variables *C*<sub>*c*</sub> |
| `tfx[it][iz][iy][ix]` | *x*-component *b*<sub>*x*,*t*</sub> of testfield fluctuations |
| `tfy[it][iz][iy][ix]` | *y*-component *b*<sub>*y*,*t*</sub> of testfield fluctuations |
| `tfz[it][iz][iy][ix]` | *z*-component *b*<sub>*z*,*t*</sub> of testfield fluctuations |

| V (derived physical variables) | description |
|:---------------------------------------------------|:------------------------------------------------------------------------------------|
| `et[iz][iy][ix]` | thermal energy density *ε* |
| `T[iz][iy][ix]` | temperature *T* |
| `ne[iz][iy][ix]` | electron number density *n*<sub>*e*</sub> |
| `ex[iz][iy][ix]`,`ey[iz][iy][ix]`,`ez[iz][iy][ix]` | electric field components (*E*<sub>*x*</sub>, *E*<sub>*y*</sub>, *E*<sub>*z*</sub>) |
| `etfx[it][iz][iy][ix]` | *x*-component of testfield EMFs |
| `etfy[it][iz][iy][ix]` | *y*-component of testfield EMFs |
| `etfz[it][iz][iy][ix]` | *z*-component of testfield EMFs |
| `fx[iu][iz][iy][ix]` | *x*-component of MHD flux function |
| `fy[iu][iz][iy][ix]` | *y*-component of MHD flux function |
| `fz[iu][iz][iy][ix]` | *z*-component of MHD flux function |
| `bsqr[iz][iy][ix]` | square of magnetic field (**B**<sup>2</sup>) |

| V (auxiliary arrays) | description |
|:------------------------------------------------------|:-----------------------------------------|
| `U[iu]` | pointers to physical variables arrays |
| `U0[iu][iz][iy][ix]`,`U1[iu][iz][iy][ix]` | solver helper variables |
| `U2[iu][iz][iy][ix]`,`U3[iu][iz][iy][ix]` | solver helper variables |
| `bx0[iz][iy][ix]`,`by0[iz][iy][ix]`,`bz0[iz][iy][ix]` | magnetic field helper variables (0) |
| `bx1[iz][iy][ix]`,`by1[iz][iy][ix]`,`bz1[iz][iy][ix]` | magnetic field helper variables (1) |
| `bx2[iz][iy][ix]`,`by2[iz][iy][ix]`,`bz2[iz][iy][ix]` | magnetic field helper variables (2) |
| `tfx1[it][iz][iy][ix]` | testfield fluctuations helper variables |
| `tfy1[it][iz][iy][ix]` | testfield fluctuations helper variables |
| `tfz1[it][iz][iy][ix]` | testfield fluctuations helper variables |
| `u[iz][iy][ix]`,`u0[iz][iy][ix]`,`u1[iz][iy][ix]` | auxiliary variables |
| `u2[iz][iy][ix]`,`u3[iz][iy][ix]` | auxiliary variables |
| `dt[iz][iy][ix]` | internal ODE solver timestep |

| V (grid metrics) | description |
|:---------------------------|:----------------------------------------------------------------------------------------------------|
| `dv[iy][ix]` | volume *δ* *V* ( = *δ* *V*<sub>*x*</sub> ⋅ *δ* *V*<sub>*y*</sub> ⋅ *δ* *z*) of a numerical cell |
| `dvx[ix]` | *x*-dependent part, *δ* *V*<sub>*x*</sub>, of cell volume |
| `dvy[iy]` | *y*-dependent part, *δ* *V*<sub>*y*</sub>, of cell volume |
| `dvyf[iy]` | *y*-dependent part, *δ* *V*<sub>*y*</sub>, of cell volume at face-centroid coordinates |
| `dax[ix]` | vector storing |
| `hy[ix]` | metric scale factor *h*<sub>*y*</sub>(*x*) at cell-nodal coordinates |
| `hyh[ix]` | metric scale factor *h*<sub>*y*</sub>(*x*) at cell-center coordinates |
| `hyc[ix]` | metric scale factor *h*<sub>*y*</sub>(*x*) at cell-centroid coordinates |
| `hyf[ix]` | metric scale factor *h*<sub>*y*</sub>(*x*) at face-centroid coordinates |
| `hzy[iy]` | metric scale factor *h*<sub>*zy*</sub>(*y*) at cell-nodal coordinates |
| `hzyh[iy]` | metric scale factor *h*<sub>*zy*</sub>(*y*) at cell-center coordinates |
| `hzyc[iy]` | metric scale factor *h*<sub>*zy*</sub>(*y*) at cell-centroid coordinates |
| `dxhy[ix]` | derivative *dh*<sub>*y*</sub>/*dx* at cell-nodal coordinates |
| `dyhzy[iy]` | derivative *dh*<sub>*zy*</sub>/*dy* at cell-nodal coordinates |
| `dyhzyh[iy]` | derivative *dh*<sub>*zy*</sub>/*dy* at cell-center coordinates |
| `dyhzyc[iy]` | derivative *dh*<sub>*zy*</sub>/*dy* at cell-centroid coordinates |
| `x[ix]`,`y[iy]`,`z[iz]` | cell-nodal coordinates |
| `xc[ix]`,`yc[iy]`,`zc[iz]` | cell-centroid coordinates |
| `xf[ix]` | face-centroid *x*-coordinate of *xy*(*xz*)-cell faces |
| `dx`,`dy`,`dz` | cell spacings |
| `dxi`,`dyi`,`dzi` | inverse of cell spacings |
| `nx`,`ny`,`nz` | last grid cell index in a grid block |
| `nx1`,`ny1`,`nz1` | `nx`-1,`ny`-1,`nz`-1 |
| `ixs`,`iys`,`izs` | grid block starting index of active cells |
| `ixe`,`iye`,`ize` | grid block ending index of active cells |
| `nbx`,`nby`,`nbz` | superblock size in terms of generic block size minus 1 |

| V (grid attributes) | description |
|:--------------------|:---------------------------------------------------------------|
| `lev` | grid block refinement level |
| `id[3]` | grid block id coordinates |
| `pos[3]` | grid index of a generic block within its parent block (AMR) |
| `posc[3]` | grid index of a generic block within its associated superblock |

A cell with index (`ix`,`iy`,`iz`) of a grid block `g` spans the domain \[`g->x[ix]`,`g->x[ix+1]`\]×\[`g->y[iy]`,`g->y[iy+1]`\]×\[`g->z[iz]`,`g->z[iz+1]`\] where (`g->x[ix]`,`g->y[iy]`,`g->z[iz]`) are the coordinates of the lower cell node. Besides cell-nodal coordinates, cell-centroid coordinates locating the center of cell volume and face-centroid coordinates locating the center of cell faces are of importance. Their definition for cylindrical and spherical geometry is given in the table below, where *Δ*<sub>`i`*x*</sub> (*Δ*<sub>`i`*y*</sub>) denotes the difference of a quantity evaluated at `ix`+1 (`iy`+1) and `ix` (`iy`). Formal half-index subscripts mean that a quantity is to be evaluated at the geometric cell center. Corresponding coordinates are called cell-centered.

| location | `g->` | definition: cylindrical | definition: spherical |
|:------------------------------|:-------------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| cell-centroid x | `xc[ix]` | *z*<sub>`i`*x* + 1/2</sub> | 3*Δ*<sub>`i`*x*</sub>*r*<sup>4</sup>/(4*Δ*<sub>`i`*x*</sub>*r*<sup>3</sup>) |
| cell-centroid y | `yc[iy]` | 2*Δ*<sub>`i`*y*</sub>*R*<sup>3</sup>/(3*Δ*<sub>`i`*y*</sub>*R*<sup>2</sup>) | *Δ*<sub>`i`*y*</sub>(*θ*cos *θ* − sin *θ*)/*Δ*<sub>`i`*y*</sub>cos *θ* |
| *xy*/*xz*-face-centroid x | `xf[ix]` | *z*<sub>`i`*x* + 1/2</sub> | 2*Δ*<sub>`i`*x*</sub>*r*<sup>3</sup>/(3*Δ*<sub>`i`*x*</sub>*r*<sup>2</sup>) |
| *yx*-face-centroid y | (≡) `yc[iy]` | 2*Δ*<sub>`i`*y*</sub>*R*<sup>3</sup>/(3*Δ*<sub>`i`*y*</sub>*R*<sup>2</sup>) | *Δ*<sub>`i`*y*</sub>(*θ*cos *θ* − sin *θ*)/*Δ*<sub>`i`*y*</sub>cos *θ* |

*Note 5: Other cell/face-centroid coordinates not listed in the table are identical to their cell/face-centered coordinates.*

*Note 6: In Cartesian geometry all cell/face-centroid coordinates are identical to their cell/face-centered counterparts.*
### Adaptive mesh refinement

The mesh refinement algorithm relies on the oct-tree data structure `_G0` (of type `*GRD`) which represents a hierarchy of nested generic grid blocks, with a generic grid block having a fixed number of 4 grid cells per coordinate direction independent of the refinement level. A generic grid block of refinement level *l* has grid spacings (*δ* *x*<sup>(*l*)</sup>, *δ* *y*<sup>(*l*)</sup>, *δ* *z*<sup>(*l*)</sup>) = (*δ* *x*<sup>(0)</sup>, *δ* *y*<sup>(0)</sup>, *δ* *z*<sup>(0)</sup>)/2<sup>*l*</sup> where (*δ* *x*<sup>(0)</sup>, *δ* *y*<sup>(0)</sup>, *δ* *z*<sup>(0)</sup>) is the grid spacing of the base level, i.e., there is a doubling of resolution from one refinement level to the next higher one.
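The level-dependent spacing can be expressed as a one-liner (illustrative, not NIRVANA API):

```c
/* spacing on refinement level l: dx^(l) = dx^(0) / 2^l  (illustrative sketch) */
double spacing_on_level(double dx0, int l)
{
    return dx0 / (double)(1 << l);
}
```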

Starting on the base level (*l* = 0), mesh refinements are realized by overlaying child grid blocks at locations which need refinement according to different refinement criteria discussed in more detail below. A child grid block covers one octant (a logical cube 2 grid cells on a side) of its parent grid block. Grid blocks can be arbitrarily nested up to some maximum refinement level given by the user-controllable code parameter `_C.amr`.
AMR is controlled by different types of refinement criteria which can be
|
|
|
individually selected by the user. The standard implementation covers
|
|
|
criteria which are
|
|
|
|
|
|
- derivatives-based (most important in practice):
|
|
|
$$\\left\[\\alpha\\frac{\|\\delta\\,\\!U\|}{\|U\|+U\_\\mathrm{ref}}+(1-\\alpha)
|
|
|
\\frac{\|\\delta\\,\\!^2U\|}{\|\\delta\\,\\!U\|+{\\rm FILTER}\\cdot(\|U\|+U\_\\mathrm{ref})}\\right\]
|
|
|
\\left(\\frac{\\delta\\,\\!x^{(l)}}{\\delta\\,\\!x^{(0)}}\\right)^{\\xi}
|
|
|
\\left\\{\\begin{array}{ll}
|
|
|
>\\mathcal{E}\_{U} &\\exists U \\quad\\hbox{refinement}\\\\
|
|
|
<0.8\\mathcal{E}\_{U} & \\forall U \\quad\\hbox{derefinement}\\\\
|
|
|
\\end{array}\\right.$$

  The criterion is applied to primary (or related) physical variables *U* ( = {𝜚, **m**, *e*, **B**, *C*<sub>*c*</sub>}). Undivided first (*δ* *U*) and second (*δ*<sup>2</sup>*U*) differences are computed along the various spatial directions. The switch *α* ∈ \[0, 1\] (code parameter: `_C.amr_d1`) selects between a purely gradient-based criterion (*α* = 1) and a purely second-derivatives-based criterion (*α* = 0). *U*<sub>ref</sub> are reference values to avoid division by zero for indefinite variables. The exponent *ξ* ∈ \[0, 2\] (code parameter: `_C.amr_exp`) introduces a power-law dependence on the grid spacing ratio *δ* *x*<sup>(*l*)</sup>/*δ* *x*<sup>(0)</sup> which allows one to control the behavior with progressive mesh refinement. The macro `FILTER` (preset to 10<sup> − 2</sup>) is a filter to suppress refinement at small-scale solution wiggles. The individual refinement thresholds are denoted by ℰ<sub>*U*</sub> (code parameter: `_C.amr_eps[0-4]`).

  *Note 1: Mesh refinement takes place if the criterion for refinement is fulfilled for at least ONE variable U. On the other hand, mesh derefinement takes place if the criterion for derefinement is fulfilled for ALL variables U.*

  *Note 2: The criterion is evaluated in the parent grid block octant to be refined and in a buffer zone around it in order to capture flow features right in time.*

- Jeans-length-based (important in simulations involving selfgravity):

  $$\left(\frac{\pi}{G}\frac{c_s^2}{\varrho}\right)^{1/2}
  \cdot \mathcal{E}_\mathrm{Jeans}\left\{\begin{array}{ll}
  <\delta s & \mbox{refinement}\\
  >1.25\,\delta s & \mbox{derefinement}
  \end{array}\right.$$

  where the first factor is the local Jeans length, with *c*<sub>*s*</sub> the sound speed and *G* the gravitational constant, and where *δ* *s* = min {*δ* *x*, *h*<sub>*y*</sub>*δ* *y*, *h*<sub>*z*</sub>*δ* *z*}. ℰ<sub>Jeans</sub> is a user-specific threshold whose reciprocal value gives the minimum number of grid cells by which the local Jeans length should be resolved.

- Field-length-based (potentially relevant for simulations involving a heatloss source but rarely used in practice):

  $$2\pi \left(\frac{\kappa T}{\max\left\{T\left(\frac{\partial L}{\partial T}\right)_{\varrho}
  -\varrho \left(\frac{\partial L}{\partial \varrho}\right)_{T},\mathrm{EPS}\right\}}
  \right)^{1/2}\cdot \mathcal{E}_\mathrm{Field}\left\{\begin{array}{ll}
  <\delta s & \mbox{refinement}\\
  >1.25\,\delta s & \mbox{derefinement}
  \end{array}\right.$$

  where the first factor is the Field length, with *L*(𝜚, *T*) the density- and temperature-dependent heatloss function and *κ* the thermal conduction coefficient. ℰ<sub>Field</sub> is a user-specific threshold whose reciprocal value gives the minimum number of grid cells by which the Field length should be resolved.
### Parallelism and load balancing

The NIRVANA code is parallelized making use of the MPI library. The code infrastructure provides routines for mesh partitioning in order to distribute the workload among threads, and data transfer routines for synchronization purposes. The data transfer model is coarse-grained, i.e., all data to be communicated between two MPI threads in a synchronization process are first collected and then transmitted in two separate single streams representing physical data and associated meta data. Whenever possible, one-to-one MPI communications are done in non-blocking mode.

Three different load balancing schemes are in use in NIRVANA. Two of the schemes serve to distribute workload among MPI threads for the non-NCCM part of the solution process (anything but chemo-thermal reactions). Workload balancing is here approached by an equal distribution of generic grid blocks, assuming that each generic grid block produces the same computational cost.

The first and simpler of the two non-NCCM schemes uses regular domain decomposition, applicable exclusively to uniform grid simulations. In regular decomposition the mesh is divided into a number of same-sized, (logically) rectangular submeshes equal to the number of MPI threads. The dimension of a submesh in any coordinate direction is subject to the limitation that it must be a multiple of 4 – the size of a generic grid block.

The second non-NCCM load balancing scheme is applied to AMR simulations. Here, mesh partitioning relies on space-filling-curve mapping techniques. In the space-filling-curve-based scheme the mesh is partitioned by an equal subdivision of the space-filling-curve interval. The distributed mesh subdomains are polyhedrally bounded but keep optimal data compactness (surface-to-volume ratio minimized). At the same time, NIRVANA features inter-level data locality by allowing smart grid block overlapping (shared block technique) at partition interfaces. This combination of space-filling-curve mesh partitioning and the shared block method means that the portion of the grid hierarchy assigned to any thread can be traversed without communication, at the expense of additional code complexity.

The load balancing strategies described above are not optimal for the NCCM part of the solution process. This is because NIRVANA uses an efficient ODE solver for evolving the thermo-chemical rate equations based on error-controlled adaptive time-stepping. Hence, the computational cost to solve the rate equations varies from grid cell to grid cell, which can result in strong workload imbalance between threads if grid blocks were equally distributed and the NCCM solution step dominates the total computational cost. To address this issue NIRVANA switches to a third load balancing approach for the NCCM solution process. The principal idea behind it is to measure, with the help of the MPI timer, the computational cost for the solution of the rate equations for a conglomerate of grid cells representing an octant of a generic grid block (as smallest unit). The a-posteriori measure thus obtained is then used as an a-priori estimate for the load balancer in the next NCCM solver call. The approach is only approximate and works the better the less density and temperature change between two successive NCCM updates. The measured workload residuals on threads (difference between actual and desired workload per thread) are diminished by an octant-wise redistribution of relevant data from threads with workload excess to threads with workload deficiency before integration. After integration the solution is copied back onto the original mesh.
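The residual computation at the heart of the third scheme can be sketched as follows. This is illustrative only, with assumed names; the real balancer works on measured MPI timer data per octant.

```c
/* Illustrative sketch, not the NIRVANA implementation: workload residual
 * of thread t, i.e. measured cost minus the desired (mean) cost. Threads
 * with positive residual donate octants to threads with negative residual. */
double workload_residual(const double *cost, int nthreads, int t)
{
    double total = 0.0;
    for (int i = 0; i < nthreads; i++) total += cost[i];
    return cost[t] - total / (double)nthreads;
}

/* tiny demo: measured costs {1,2,6}, mean 3, so thread 2 is 3.0 over budget */
double demo_residual(void)
{
    static const double cost[3] = {1.0, 2.0, 6.0};
    return workload_residual(cost, 3, 2);
}
```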

*Note: The NCCM load balancer comes to its limit when the computational cost is highly concentrated on only a few grid cells.*