CONTENTS:

[Log file](3.3-Output-data#log-file)

[Snapshot files](3.3-Output-data#snapshot-files)

[Monitoring file](3.3-Output-data#monitoring-file)

NIRVANA produces three different kinds of output data files during a
simulation:

- Log file – `nirvana.log`

- Snapshot files – `NIR#.#` and `NIRLAST.#` with number identifiers `#`

- Monitoring file – `nirvana.mon`

### Log file

The log file `nirvana.log` is a structured text file. It stores
selected simulation parameters and records the timeline of a
simulation in brief information profiles. A profile contains
runtime parameters like the cycle number, the various timesteps of
physical processes, timing information, grid statistics, information on
conservation properties, as well as global sums of basic physical
quantities like the mass, momenta, and energies. `nirvana.log` is updated
every `_C.freq_log` timestep cycles by adding a new record. Browsing
through the file allows a quick check of whether a simulation run behaves
well or not. In case NIRVANA encounters a *code-controlled* exception
during runtime, a message about the type of exception is written to
`nirvana.log` before the job is terminated.

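Since an exception message is appended to `nirvana.log` right before the job terminates, a quick post-mortem check can be scripted. The sketch below is only an illustration: it assumes that exception records contain the word "exception", which is an assumption about the message wording, not a documented format.

```python
# Minimal sketch: scan a NIRVANA log for exception messages.
# ASSUMPTION: exception records contain the word "exception";
# the exact message format is not specified here.

def find_exceptions(log_lines):
    """Return all lines that look like exception messages."""
    return [line.rstrip() for line in log_lines
            if "exception" in line.lower()]

# Usage with a hypothetical log excerpt (contents are made up):
sample = [
    "cycle 100  dt=1.2e-3  mass=1.000\n",
    "cycle 200  dt=9.8e-4  mass=1.000\n",
    "code-controlled exception: timestep underflow\n",
]
print(find_exceptions(sample))
```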
### Snapshot files

A snapshot is a set of files which dumps the state of a simulation at
one point in time, represented by the cycle number `_C.model`. Snapshot
files are data containers named `NIR#.#`, where the pre-dot `#` stands
for the value of `_C.model` and the post-dot `#` for the container id. In
serial runs there is just one container, i.e. a snapshot consists of one
file named `NIR#.1`.

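The `NIR#.#` naming scheme can be decoded mechanically, e.g. when collecting snapshots in a post-processing script. A minimal sketch (the filenames used below are illustrative):

```python
import re

# Decode a snapshot filename NIR<model>.<container>, where <model> is the
# cycle number _C.model and <container> is the container id (1 in serial runs).
SNAPSHOT_RE = re.compile(r"^NIR(\d+)\.(\d+)$")

def parse_snapshot_name(name):
    """Return (model, container_id) for a name like 'NIR120.1', else None."""
    m = SNAPSHOT_RE.match(name)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

print(parse_snapshot_name("NIR120.1"))   # → (120, 1)
print(parse_snapshot_name("nirvana.log"))  # → None (not a snapshot file)
```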
In MPI runs NIRVANA dispenses with the parallel I/O functionality of MPI.
Instead, the data of the individual partitions (threads) is written
consecutively into the data containers. The total number of containers can be
fixed by the user as a command line argument of the NIRVANA executable in
the `mpirun` command (cf. [Compiling and running MPI
jobs](2-Getting-started#compiling-and-running-mpi-jobs)). It can be
any number between 1 (all partitions are stored in one file; serial
writing) and the number of threads (each partition is stored in a separate
file; parallel writing). By default, the number of containers equals the
number of threads.

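Since partitions are written consecutively into a user-chosen number of containers between 1 and the number of threads, the grouping can be pictured as below. The even, consecutive assignment is only an illustration of the idea, not NIRVANA's actual internal mapping:

```python
# Illustrative sketch (NOT NIRVANA's actual internal mapping): distribute
# nthreads partitions consecutively over ncontainers data containers.

def partitions_per_container(nthreads, ncontainers):
    """Group partition ids 0..nthreads-1 into ncontainers consecutive chunks."""
    assert 1 <= ncontainers <= nthreads
    base, extra = divmod(nthreads, ncontainers)
    groups, start = [], 0
    for c in range(ncontainers):
        size = base + (1 if c < extra else 0)
        groups.append(list(range(start, start + size)))
        start += size
    return groups

print(partitions_per_container(8, 1))  # serial writing: all in one container
print(partitions_per_container(8, 8))  # parallel writing: one file per thread
print(partitions_per_container(8, 3))  # intermediate choice
```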
During the runtime of a simulation a sequence of snapshots is usually
produced, starting with `NIR0.#`, the snapshot containing the IC.
Snapshots are written out after every `_C.freq_nir` timestep cycles as
specified by the user in `nirvana.par`, i.e. `_C.freq_nir` denotes the
output frequency in terms of the number of timesteps. This generally implies
that the sequence of snapshots does not represent a sequence of
equidistantly time-spaced physical states unless the timestep is
constant throughout.

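The non-equidistant spacing is easy to see in a toy calculation: with a varying timestep, dumping every `_C.freq_nir` cycles yields unevenly spaced physical times. The timestep values below are made up for illustration:

```python
# Toy illustration: with a non-constant timestep, snapshots written every
# freq_nir cycles are NOT equidistant in physical time.

def snapshot_times(dts, freq_nir):
    """Physical times at which snapshots are written, starting with t=0 (the IC)."""
    times, t = [0.0], 0.0
    for cycle, dt in enumerate(dts, start=1):
        t += dt
        if cycle % freq_nir == 0:
            times.append(t)
    return times

# Made-up shrinking timesteps:
dts = [0.5, 0.25, 0.25, 0.125, 0.125, 0.125]
print(snapshot_times(dts, freq_nir=2))  # → [0.0, 0.75, 1.125, 1.375]
```

The dumps at cycles 0, 2, 4, 6 are 0.75, 0.375, and 0.25 time units apart, even though the cycle spacing is constant.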
The purpose of a snapshot is twofold. First, it is used for restarting
a simulation. This becomes necessary when the requested simulation time
is longer than the permitted walltime per single run, e.g. on cluster
systems, or when a simulation fails and is to be restarted after
debugging. Second, snapshots are the input for the data converter
`CAIVS`, which produces output formats more suitable for
visualization tasks (cf. 4 CAIVS user guide
[Wiki](https://gitlab.aip.de/ziegler/NIRVANA/-/wikis/4-CAIVS-user-guide),
[PDF](https://gitlab.aip.de/ziegler/NIRVANA/-/tree/master/doc/pdf/4_CAIVS-user-guide.pdf)).

In addition, the special files `NIRLAST.#` are generated during runtime,
which always store the last available snapshot. `NIRLAST.#` is
either renewed every `_C.freq_walltime` seconds, as specified by the user
in `nirvana.par`, or is overwritten by the latest produced snapshot.

### Monitoring file

The monitoring file `nirvana.mon` is a CSV-like counterpart to
`nirvana.log`. Data is stored in a line-by-line fashion, with each line
recording quantities very similar to an entry in `nirvana.log`.
Like `nirvana.log`, `nirvana.mon` is updated every `_C.freq_log`
timestep cycles. The monitoring file is primarily intended as an
input file for the following programs, which allow a visualization of the
runtime history of a simulation:

- TITVS monitor – a Python-based GUI (currently not available)

- `readMON.pro` – an IDL procedure located in the directory
  `/nirvana/caivs/idl`. The procedure requires the filename
  `nirvana.mon` as input and returns a structure list `G` as output:

      IDL> readMON,'$PATH/nirvana.mon',G

  `G` represents the timeline of data with `G(it).V` addressing an
  element `V` of the `it`-th time record. Current elements are:

| `V`                      | meaning                                                                                                                                               |
|:-------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------|
| `MODEL`                  | timestep cycle number                                                                                                                                 |
| `TIME`                   | physical time                                                                                                                                         |
| `DT_MHD`                 | MHD timestep                                                                                                                                          |
| `DT_VISC`                | fluid viscosity timestep                                                                                                                              |
| `DT_DIFF`                | Ohmic diffusion timestep                                                                                                                              |
| `DT_COND`                | thermal conduction timestep                                                                                                                           |
| `DT_APDIFF`              | ambipolar diffusion timestep                                                                                                                          |
| `DT_HTLS`                | heatloss timestep                                                                                                                                     |
| `DT_NCCM`                | NCCM-related timestep                                                                                                                                 |
| `MASS`                   | mass                                                                                                                                                  |
| `MX`,`MY`,`MZ`           | *x*,*y*,*z*-momentum of motion                                                                                                                        |
| `ETOT`                   | total energy                                                                                                                                          |
| `ETH`                    | thermal energy                                                                                                                                        |
| `EGR`                    | gravitational energy                                                                                                                                  |
| `EKX`,`EKY`,`EKZ`        | kinetic energy in *x*,*y*,*z*-component of motion                                                                                                     |
| `EMX`,`EMY`,`EMZ`        | magnetic energy in *x*,*y*,*z*-component of magnetic field                                                                                            |
| `DMASS`                  | change in mass                                                                                                                                        |
| `DMX`,`DMY`,`DMZ`        | change in momentum components                                                                                                                         |
| `DETOT`                  | change in total energy                                                                                                                                |
| `DIVB_MAX`,`DIVB_AVG`    | maximum, average relative change in \|∇ ⋅ **B**\|                                                                                                     |
| `DRHOX_MAX`,`DRHOX_AVG`  | maximum, average relative change in \|*m*<sub>*u*</sub>∑<sub>*s*</sub>(*μ*<sub>*s*</sub> + *μ*<sub>*e*</sub>*q*<sub>*s*</sub>)*n*<sub>*s*</sub> − 𝜚\| |
| `MAXLEVEL`               | current highest mesh refinement level                                                                                                                 |
| `CELLS`                  | total number of visible cells (excluding covered cells)                                                                                               |
| `BLOCKS[il]`             | number of generic blocks building refinement level `il`                                                                                               |
| `MRP`                    | number of current mesh repartitioning events                                                                                                          |
| `TI_MHD_COMP`            | timings for MHD computation part                                                                                                                      |
| `TI_MHD_SYNC`            | timings for MHD synchronization part                                                                                                                  |
| `TI_RKL_COMP`            | timings for RKL-solver computation part                                                                                                               |
| `TI_RKL_SYNC`            | timings for RKL-solver synchronization part                                                                                                           |
| `TI_GRAV_COMP`           | timings for gravity-solver computation part                                                                                                           |
| `TI_GRAV_SYNC`           | timings for gravity-solver synchronization part                                                                                                       |
| `TI_NCCM_COMP`           | timings for NCCM-solver computation part                                                                                                              |
| `TI_NCCM_SYNC`           | timings for NCCM-solver synchronization part                                                                                                          |
| `TI_MESH`                | timings for mesh operations                                                                                                                           |
| `TI_IO`                  | timings for I/O operations                                                                                                                            |
| `WALLTIME`               | current walltime in seconds                                                                                                                           |

Then, as an example, plotting the MHD timestep as a function of time
reads

    IDL> plot,G(*).TIME,G(*).DT_MHD

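For users without IDL, a similar timeline can be extracted by parsing `nirvana.mon` directly. The reader below is only a sketch: it assumes a header line naming the columns and comma-separated values, which may not match the actual file layout.

```python
import csv
import io

# Sketch of a nirvana.mon reader. ASSUMPTION: the file is comma-separated
# with a header line naming the columns (the real layout may differ).

def read_mon(stream):
    """Parse a CSV-like monitoring stream into a dict of float columns."""
    rows = list(csv.DictReader(stream))
    return {key: [float(r[key]) for r in rows] for key in rows[0]}

# Hypothetical two-record excerpt:
sample = io.StringIO("MODEL,TIME,DT_MHD\n100,0.5,1.0e-3\n200,0.9,8.0e-4\n")
G = read_mon(sample)
print(G["TIME"], G["DT_MHD"])
# The DT_MHD-vs-TIME plot would then be e.g. matplotlib's
# plot(G["TIME"], G["DT_MHD"]).
```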
PREV: [3.2 User interfaces](3.2-User-interfaces) NEXT: [3.4 Restarting a simulation](3.4-Restarting-a-simulation)