This directory is part of PDAF - the Parallel Data Assimilation Framework. It contains implementations of the offline and online modes of PDAF with a simple 2-dimensional model setup. The implementation is described in detail in the tutorials that are available on the web site of PDAF (http://pdaf.awi.de).

The directories are as follows:

---- Input files ----

inputs_offline/
   This directory contains input files to execute the example programs for the offline mode of PDAF.

inputs_online/
   This directory contains input files to execute the example programs for the online mode of PDAF.

inputs_online_2fields/
   This directory contains input files to execute the example programs for the online mode of PDAF for the model using two fields.

---- OFFLINE mode ----

offline_2D_serial/
   Contains an implementation of the offline mode without MPI parallelization (domain decomposition). However, this version supports OpenMP parallelization for the localized filters.

offline_2D_parallel/
   Contains an implementation of the offline mode with MPI parallelization. Thus, the example can be run with multiple processors. An MPI library like OpenMPI is required.

---- ONLINE mode ----

online_2D_serialmodel/
   Contains an implementation of the online mode of PDAF with a serial (i.e. non-parallelized) model. Only the ensemble integration is performed in parallel, while each model and the filter are each executed using a single process. However, this version supports OpenMP parallelization for the localized filters.

online_2D_serialmodel_2fields/
   Contains an implementation of the online mode of PDAF with a serial (i.e. non-parallelized) model using two model fields. Only the ensemble integration is performed in parallel, while each model and the filter are each executed using a single process.

online_2D_parallelmodel/
   Contains an implementation of the online mode of PDAF with a parallelized model. In this case, both the model and the filter are parallelized, and the ensemble integration can also be performed in parallel. This case can be run e.g. with

      mpirun -np 18 ./model_pdaf -dim_ens 9

   (This example uses 9 model tasks of 2 processes each. The processes of model task 1 are also used to compute the filter.)

---- ONLINE mode - special cases with separate processes for model and filter ----

online_2D_parallelmodel_fullpar/
   As online_2D_parallelmodel, this directory contains an implementation of the online mode of PDAF with a parallelized model. This variant uses a particular configuration in which the filter is executed using different processes than the models. Thus, all ensemble members are exchanged between the model tasks and the filter processes using MPI communication. This case can be run e.g. with

      mpirun -np 20 ./model_pdaf -dim_ens 9

   (This example uses 9 model tasks of 2 processes each. Another 2 processes are used for the filter.)

   The routines are identical to those of online_2D_parallelmodel, except for main_pdaf.F90 and init_parallel_pdaf.F90. In the latter routine, the initialization of the MPI communicators is changed so that the filter uses a set of processes that is distinct from the processes running the model. In main_pdaf.F90, the subroutine calls for the filter processes are separated from those for the model processes to show more clearly which parts are run by which processes.
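   The communicator setup of this configuration can be pictured with a minimal sketch of such a splitting. This is NOT the code of init_parallel_pdaf.F90; the routine name split_comms_sketch and the variables n_filter and npes_model are assumptions made for the illustration only:

      ! Minimal sketch (NOT the actual init_parallel_pdaf.F90): split
      ! MPI_COMM_WORLD so that the last n_filter processes form a separate
      ! filter communicator, while the remaining processes are divided
      ! into dim_ens model tasks of equal size.
      SUBROUTINE split_comms_sketch(dim_ens, n_filter, comm_new)
        USE mpi
        IMPLICIT NONE
        INTEGER, INTENT(in)  :: dim_ens   ! Number of model tasks
        INTEGER, INTENT(in)  :: n_filter  ! Processes reserved for the filter
        INTEGER, INTENT(out) :: comm_new  ! Model-task or filter communicator
        INTEGER :: npes, mype, npes_model, color, ierr

        CALL MPI_Comm_size(MPI_COMM_WORLD, npes, ierr)
        CALL MPI_Comm_rank(MPI_COMM_WORLD, mype, ierr)

        npes_model = (npes - n_filter) / dim_ens    ! e.g. (20-2)/9 = 2

        IF (mype < npes - n_filter) THEN
           color = mype / npes_model     ! Model task index 0 .. dim_ens-1
        ELSE
           color = dim_ens               ! All filter processes share one color
        END IF

        ! One communicator per model task; the filter processes get their own
        CALL MPI_Comm_split(MPI_COMM_WORLD, color, mype, comm_new, ierr)
      END SUBROUTINE split_comms_sketch

   With mpirun -np 20 and dim_ens=9, such a splitting would yield nine model communicators of 2 processes each plus one separate 2-process filter communicator, matching the example above.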
online_2D_parallelmodel_fullpar_1fpe/
   As online_2D_parallelmodel, this directory contains an implementation of the online mode of PDAF with a parallelized model. This variant uses a particular configuration in which the filter is executed on a single process, which is different from the processes that compute the model integrations. Thus, the numbers of processes used for the filter and for each model task differ. This case can be run e.g. with

      mpirun -np 19 ./model_pdaf -dim_ens 9

   (This example uses 9 model tasks of 2 processes each. One further process is used for the filter.)

   The routines are identical to those of online_2D_parallelmodel, except for main_pdaf.F90, init_parallel_pdaf.F90, distribute_state_pdaf.F90, and collect_state_pdaf.F90. In init_parallel_pdaf.F90, the initialization of the MPI communicators is changed so that the filter uses a single process that is distinct from the processes running the model. This setup is analogous to that used in online_2D_parallelmodel_fullpar, except that a single process is used here. The routine main_pdaf.F90 is identical to that used in online_2D_parallelmodel_fullpar. Since in this configuration the filter uses only a single process, while the model integrations are each parallelized, only the model processes with rank 0 communicate directly with the filter process. To account for this particularity, the routine distribute_state_pdaf.F90 is changed so that it distributes the global state vector over the model processes of each model task; thus, each model process initializes the field in its local sub-domain. Analogously, the routine collect_state_pdaf.F90 collects a global state vector on the model process with rank 0 of each task.
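   To illustrate the sub-domain distribution just described, the following minimal sketch scatters the global state vector from the rank-0 process of a model task to all processes of that task. This is NOT the actual distribute_state_pdaf.F90; the names comm_model, counts, and displs are assumptions made for the illustration:

      ! Minimal sketch (NOT the actual distribute_state_pdaf.F90): rank 0
      ! of a model task holds the global state vector received from the
      ! filter process; MPI_Scatterv hands every process of the task the
      ! slice that belongs to its local sub-domain.
      SUBROUTINE distribute_state_sketch(comm_model, dim_local, counts, &
           displs, state_global, state_local)
        USE mpi
        IMPLICIT NONE
        INTEGER, INTENT(in) :: comm_model   ! Communicator of this model task
        INTEGER, INTENT(in) :: dim_local    ! Size of the local sub-domain
        INTEGER, INTENT(in) :: counts(:)    ! Sub-domain sizes of all task processes
        INTEGER, INTENT(in) :: displs(:)    ! Offsets of the sub-domains
        REAL(kind=8), INTENT(in)  :: state_global(:)         ! Used on rank 0 only
        REAL(kind=8), INTENT(out) :: state_local(dim_local)
        INTEGER :: ierr

        CALL MPI_Scatterv(state_global, counts, displs, MPI_DOUBLE_PRECISION, &
             state_local, dim_local, MPI_DOUBLE_PRECISION, 0, comm_model, ierr)
      END SUBROUTINE distribute_state_sketch

   Analogously, collect_state_pdaf.F90 would use MPI_Gatherv to collect the local slices back into a global state vector on rank 0 of the task.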
---- 3D-Var tutorials ----

3dvar/offline_2D_serial/
   Adaptation of the case offline_2D_serial for 3D-Var schemes.

3dvar/online_2D_serialmodel/
   Adaptation of the case online_2D_serialmodel for 3D-Var schemes.

3dvar/online_2D_parallelmodel/
   Adaptation of the case online_2D_parallelmodel for 3D-Var schemes.

---- Testing and verification ----

verification/
   This directory contains output files from a test run using the script 'runtests.sh'. The script runtests.sh allows one to compile and run all tutorial cases; the outputs in verification/ can then be used for comparison. To run the script, one has to set ARCH to the correct make.arch file for the computer on which the tests are run (the default is linux_gfortran_openmpi). To test the cases with OpenMP parallelization, the make.arch file needs to activate OpenMP for the compilation; runtests.sh sets the number of OpenMP threads by itself.

---- Plotting ----

The directory plotting/ contains plot scripts for Python:

plot_file.py
   Plot a field from a specified file.
   Usage: ./plot_file.py <filename>

plot_diff.py
   Plot the difference of two fields (e.g. truth and analysis).
   Usage: ./plot_diff.py <filename1> <filename2>

plot_ens.py
   Plot the time series of ensemble mean and spread for the grid point (i,j).
   Usage: ./plot_ens.py <i> <j>

---- Further directories ----

Over the years, the interface of PDAF has evolved. The previous interfaces are still usable for backward compatibility, and examples are given for the following cases. Note: While the previous interfaces are still usable, we strongly recommend performing new implementations with the latest interface standard.

---- Implementations using the full (PDAF-1) interface (without PDAF-OMI) ----

PDAF1_full_interface/
   When PDAF V1.16 introduced PDAF-OMI (Observation Module Infrastructure), the tutorial implementations above were updated for PDAF-OMI. The classical implementations, i.e. those not using OMI, have been moved into the directory PDAF1_full_interface for reference. This implementation uses the full interface, in which a user provides all possible call-back routines. One can consider this an 'expert mode', since this approach can yield the highest compute performance.

---- Implementations using PDAFomi and PDAFlocalomi (final status of PDAF 2.3.1) ----

PDAF2/
   With PDAF V3.0, a new interface was introduced that combines the features of PDAF-OMI and PDAF-local, which were introduced with PDAF V1.16 (OMI) and PDAF 2.3 (PDAF-local). With PDAF V3.0, the tutorial implementations above have been updated to use the PDAF3 interface. The previous implementations, using the PDAFomi interface for global DA methods and the PDAFlocalomi interface for localized DA methods, have been moved into the directory PDAF2/ for reference.