An introduction to GEMDM
Environment Canada
Last Update: Sept. 14, 2005

 
   
RESEARCH Work:
   
VERSION   Rel. Date
v_3.3.0 30/05/2007
v_3.2.2 12/16/2005
v_3.2.0 10/22/2004
v_3.1.2 04/27/2004
   
Quick References to:
GEMDM Environment
GEMDM Flowchart
Batch Mode Setup
GEMDM structure
Setup to build GEMDM Absolutes
Making Own Absolutes/Binaries
Batch Mode
Interactive Mode
GEMDM MACROS
Code Standards
LAM
   



Introduction to GEMDM



GEMDM is a Distributed Memory version of GEM


The Distributed Memory (DM) implementation of the GEM model is one whereby a global domain of dimension G_ni x G_nj is split into subdomains of dimension l_ni x l_nj using a regular block partitioning technique. The partitioning is based on the user's choice of 'Ptopo_npex', the number of processors used to split G_ni, and 'Ptopo_npey', the number of processors used to split G_nj. This creates an array of subdomains to which we match an array of processors known as a 'processor topology' of (Ptopo_npex x Ptopo_npey). Each processor computes only on its own local subdomain of dimension l_ni x l_nj.

An example of a processor topology of (2x2) would look like this:

ie: Ptopo_npex=2, Ptopo_npey=2



Note that each PE will see a different value for its own l_ni,l_nj. The l_ni and l_nj values are determined at run-time based on the processor topology and the number of gridpoints for the global domain (G_ni,G_nj).
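As a rough sketch of how such local dimensions could be derived (the model computes the actual partition at run-time, and its algorithm may differ from this most-even split):

```shell
# Most-even block split of G_ni=23 gridpoints across Ptopo_npex=2 processors.
# Illustration only; the model's run-time partitioning may differ.
G_ni=23; npex=2
for pe in $(seq 0 $((npex-1))); do
  base=$((G_ni / npex))          # minimum points per PE
  rem=$((G_ni % npex))           # leftover points to distribute
  if [ "$pe" -lt "$rem" ]; then l_ni=$((base+1)); else l_ni=$base; fi
  echo "PE $pe: l_ni=$l_ni"
done
```

Here PE 0 would hold 12 points and PE 1 would hold 11; the same split applies independently along G_nj with Ptopo_npey.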

This DM implementation of GEMDM uses the Message Passing Interface (MPI) library. In this context, n = (Ptopo_npex x Ptopo_npey) exact copies of the main program are launched at once at startup. These are known as the MPI processes, to each of which a PE is typically assigned. Through a series of initial communications, each PE obtains its rank and its position within the processor topology. Data decomposition can then take place, and computation starts immediately afterwards.
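As an illustration of the rank-to-position mapping, here is a hypothetical row-major placement for a (2x2) topology (the actual ordering used by RPN_COMM may differ):

```shell
# Hypothetical mapping of MPI rank to (px,py) position in a 2x2 topology,
# assuming a row-major ordering. Illustration only.
npex=2; npey=2
for rank in $(seq 0 $((npex*npey-1))); do
  px=$((rank % npex))
  py=$((rank / npex))
  echo "rank $rank -> (px=$px, py=$py)"
done
```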

Because of the horizontal dependencies inherent in the horizontal stencil of computation, exchanges of data between processors are required. For this reason a HALO communication region surrounds the actual computational region of every subdomain. This region is used strictly by MPI for inter-processor communication. All MPI functionality (primitives) is currently hidden under a special library called RPN_COMM. This library is maintained at RPN and is described in detail at:

../../../si/libraries/rpncomm/rpn_comm


KEEP INFORMED ON CHANGES!!

To be kept informed of current developments and problem-solving related to GEMDM, one should subscribe to the "gem" mailing list by sending an e-mail to "Majordomo@cmc.ec.gc.ca" with the line:

subscribe gem

One can follow the progress, changes, and evolution of GEMDM by referring to the revision documents listed under VERSION, with their release dates.


GEMDM ENVIRONMENT

To obtain the proper GEMDM environment at the desired version, one must issue the following command for each new window session:

    . r.sm.dot gem [version]

For example:

    pollux 17% . r.sm.dot gem 3.2.0
    adding Model=GEMDM, Version=3.2.0 component scripts to PATH
    ($ARMNLIB/modeles/GEMDM/v_3.2.0/scripts)
    adding Model=GEMDM, Version=3.2.0 component bin/IRIX64 to PATH
    ($ARMNLIB/modeles/GEMDM/v_3.2.0/bin/IRIX64)

Note that this has added something to your environment variable $PATH and also set up a new environment variable $gem specific to the version and GEMDM model.
    pollux 18% echo $PATH
    /usr/local/ssh/bin:/opt/pgi/linux86/bin:/data/dormrb04/tmpdirs/armnviv/399631791
    /bin:/users/dor/armn/viv/ovbin/IRIX5:/users/dor/armn/viv/ovbin:.:/software/pub/s
    xf90-2.64/bin:/usr/sbin:/usr/bsd:/sbin:/usr/bin:/usr/bin/X11:/usr/freeware/bin:/
    software/host/bin:/software/base/bin:/software/batch/bin:/software/pub/bin:/usr/
    bsd:/software/cmc/bin:/usr/local/env/gnu/bin:/usr/sbin:/sbin:/usr/local/env/armn
    lib/bin:/usr/local/env/armnlib/scripts:/usr/local/env/afsisio/scripts/usr:/usr/l
    ocal/env/afsisio/scripts/op:/usr/local/env/afsisio/programs:/users/dor/armn/viv/
    bin/IRIX5:/users/dor/armn/viv/bin:/usr/etc:/usr/local/env/armnlib/modeles/GEMDM/
    v_3.2.0/scripts:/usr/local/env/armnlib/modeles/GEMDM/v_3.2.0/bin/IRIX64

    pollux 19% echo $gem
    /usr/local/env/armnlib/modeles/GEMDM_shared/v_3.2.0


Now look at what is in $gem:

    pollux 20% cd $gem
    pollux 21% ls
    Makefile_AIX           RCS_DYN_bk20050120/    run_configs/
    Makefile_IRIX64        RCS_DYN_bk20050121/    scripts/
    Makefile_Linux         bin/                   scripts_bk20050120/
    RCS/                   dfiles/                src/
    RCS_4DVAR/             doc/                   src_4d/
    RCS_4DVAR_bk20050121/  lib/
    RCS_DYN/               patches/
    



    The following table describes the relevant elements above:

scripts          directory for scripts that are active in the GEM environment
                 (ie: Um_launch, Um_runent.sh, runent, runmod, d2z, ...)
RCS              RCS directory for GEMDM decks
RCS_4DVAR        RCS directory for 4DVAR decks
src              directory containing source code for GEMDM
src_4d           directory containing source code for 4DVAR
doc              directory containing documentation for:
                   configuration files (ie: configexp.dot.cfg, gem_settings.nml, outcfg.out)
                   release notes
                   patch releases
bin              directory containing pre-processing and post-processing binary executables
lib              directory for libraries
patches          directory for patches issued after the original release
Makefile_AIX     Makefile for Azur
Makefile_IRIX64  Makefile for Pollux
Makefile_Linux   Makefile for Linux machines



SETUP TO RUN GEMDM IN BATCH MODE

First, you must create (do not link) directories called 'gem' and 'listings' in your $HOME if they do not already exist. The execution directory (EXECDIR) on the execution machine (mach) has the generic form:

    ${HOME}/gem/${mach}/${exp}

The launching scripts will require ${HOME}/gem to already exist. The purpose of this is to force the user to provide space for the execution of the model through a proper link of directory ${HOME}/gem.


For example:
    cd $HOME
    mkdir gem
    mkdir listings
In batch mode, the job creates its execution directory under ${HOME}/gem/${mach}/; if ${mach} does not exist, the execution directory is created directly under ${HOME}/gem/. "${mach}" should point to a file system that the intended platform can see, so it is recommended to create soft links under the directory gem for each platform where batch jobs are to be executed.
Example:
    pollux 22% cd $HOME/gem
    pollux 23% ln -s /fs/mrb/02/armn/armnviv azur
    (for azur)
    pollux 24% ln -s /data/dormrb03/armn/armnviv/pollux pollux (for pollux)
    pollux 25% ln -s /data/local2/armn/armnviv/lorentz lorentz (for lorentz)
    pollux 26% ls -al
    drwxr-xr-x 2 armnviv armn 4096 Mar 24 10:25 ./
    drwxr-xr-x 71 armnviv armn 20480 Sep 12 13:05 ../
    lrwxrwxrwx 1 armnviv armn 23 Dec 15 2003 azur -> /fs/mrb/02/armn/armnviv
    lrwxrwxrwx 1 armnviv armn 19 Mar 24 10:25 lorentz -> /data/local/armnviv
    lrwxrwxrwx 1 armnviv armn 34 Sep 29 2003 pollux -> /data/dormrb03/armn/armnviv/pollux
You can also redirect listings to respective platform directories.
Example:
    pollux 27% cd $HOME
    pollux 28% cd listings
    pollux 29% ln -s /fs/mrb/02/armn/armnviv/listings azur
    (for azur)
    pollux 30% ln -s /data/local2/armn/armnviv/lorentz/listings lorentz (for lorentz)



Structure of GEMDM

Look at the structure of data flow in the
gemdmflow.pdf.
Note that two absolutes/binaries (executables) are needed to run the GEMDM model: one for the entry program (gemntr) and one for the main program (gemdm). The names of these absolutes reflect the platform on which they were created and have the following structure:

(a) maingemntr_${ARCH}_${version}.Abs (for the entry program)
(b) maingemdm_${ARCH}_${version}.Abs (for the main program)

${ARCH} represents the OS of your machine (ie: IRIX64 for pollux, AIX for azur, Linux for your PC)
${version} represents the version of GEMDM

ie:
maingemntr_AIX_3.2.0.Abs, maingemdm_AIX_3.2.0.Abs (for azur)
maingemntr_IRIX64_3.2.0.Abs, maingemdm_IRIX64_3.2.0.Abs (for pollux)
maingemntr_Linux_3.2.0.Abs, maingemdm_Linux_3.2.0.Abs (for arxt22)
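The naming structure above can be sketched in shell (ARCH is set explicitly here for illustration; in a real session it would reflect your platform, e.g. via uname -s):

```shell
# Sketch: composing the absolute names from ${ARCH} and ${version}.
# ARCH is hard-coded for illustration.
ARCH=Linux
version=3.2.0
ntr_abs="maingemntr_${ARCH}_${version}.Abs"
dm_abs="maingemdm_${ARCH}_${version}.Abs"
echo "$ntr_abs $dm_abs"
```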

Note that two configuration files are read by the absolutes:
    (a) outcfg.out: controls the RPN standard file output from the GEMDM run (frequency of output, which fields, at what levels, etc.)
    (b) gem_settings.nml: contains all the namelists that control the different schemes and parameters for the entry program (gemntr) and the main program (gemdm)
There is a third configuration file, not shown in the GEMDM flowchart because it is only used in batch mode:
    (c) configexp.dot.cfg: controls where to send the batch run, how much CPU time is allowed, which machine runs the job, where to place the stdout (listings), etc.
Please refer to the detailed documentation for each of these files in: $gem/doc
Running GEMDM INTERACTIVELY

Interactive runs can presently only be done on POLLUX or Linux.

Create a working directory (preferably in your home) and either create the absolutes or make soft links to existing absolutes in this directory. No absolutes are provided in $gem; they must be created, or you can use absolutes created by someone else provided they are:
    (a) visible on the platform where you want to run GEMDM
    (b) of the correct GEMDM version
Here is an example of making soft links to these absolutes in your working directory:
cd gem_320
ln -s /data/dormrb04/armnviv/tst1/maingemntr_IRIX64_3.2.0.Abs maingemntr_IRIX64_3.2.0.Abs
ln -s /data/dormrb04/armnviv/tst1/maingemdm_IRIX64_3.2.0.Abs maingemdm_IRIX64_3.2.0.Abs

Next, you must create 2 subdirectories in the working directory (or make soft links if disk space is limited):
    pollux 23% cd gem_320 (ie: your working directory)
    pollux 24% mkdir process
    pollux 25% mkdir output
Besides having valid sub-directories output and process, and "absolutes" generated for the right machine, copies of the configuration files gem_settings.nml and outcfg.out should be placed locally in your working directory. You may obtain a copy of these two files from $gem/run_configs/dbg1 to try them out.
    pollux 26% . r.sm.dot gem 3.2.0
    pollux 27% cp $gem/run_configs/dbg1/gem_settings.nml .
    pollux 28% cp $gem/run_configs/dbg1/outcfg.out .
    pollux 29% ls
    gem_settings.nml maingemntr_IRIX64_3.2.0.Abs@ output/
    maingemdm_IRIX64_3.2.0.Abs@ outcfg.out process/
If you created the absolutes yourself, you would also have additional files and directories such as Makefile, arbre_de_dependance, malibIRIX64, etc.

Execute the entry program gemntr first by typing the following script (defined in $gem/scripts):
    pollux 30% runent > out_gemntr
    pollux 31% cd process
    pollux 32% ls

    bm20010920120000-00-00 bm20010920120000-01-01 labfl.bin
    bm20010920120000-00-01 cmclog mpi_nodes.cfg
    bm20010920120000-01-00 geophy.bin output_settings@

The above will use default climatology, analysis and geophysical files. The script runent simply contains the following line:

    . r.call.dot Um_runent.sh
Here is an example of using specific analysis or climatology files:
    Um_runent.sh -anal myanalysis -climato myclimato
Or you may copy the script "runent" into your working directory and modify it there so that it uses your specific input files.
    At any time you may copy and modify any of the default scripts in $gem/scripts that have a '.sh' suffix; a modified copy visible in the working directory always takes precedence for that working directory. If a more global effect is desired, store the modified scripts under your home in the form: $HOME/modeles/GEMDM/v_{version}/scripts
    (Beware that such scripts will apply any time you run under GEMDM v_{version}.)
Execute the main program gemdm by typing the following script (defined in $gem/scripts):
    Um_runmod.sh (or 'runmod')
    pollux 33% runmod > out_gempp

The output of a run (Um_runmod.sh) will be found in the sub-directory "output" in the form of RPN std diese (#) grid files, where each file represents one of the processors.

    pollux 34% cd output
    pollux 35% ls

    casc/ dm2001092012-00-01_000 dm2001092012-01-01_000
    dm2001092012-00-00_000 dm2001092012-01-00_000
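The file names appear to encode the PE's position in the processor topology; here is a sketch generating the names seen above for a (2x2) run (the exact naming convention is determined by the model and may differ):

```shell
# Sketch: per-PE output file names for a 2x2 run, assuming the pattern
# dm<datestamp>-<px>-<py>_000 observed in the listing above.
npex=2; npey=2; stamp=2001092012
for py in $(seq 0 $((npey-1))); do
  for px in $(seq 0 $((npex-1))); do
    printf "dm%s-%02d-%02d_000\n" "$stamp" "$px" "$py"
  done
done
```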
Each file contains a piece of the whole grid. Output can also be forced to be reassembled in a "blocked" topology (processors grouped together), yielding larger partial grids in fewer files or, at times, the whole grid. A model run is more efficient when the output is not regrouped, but there is a limit to how many files can be post-processed at the end. Where reassembly is required, execute the following script from just above the directory output:
    d2z -rep output -nbf 4 (where 4 is the number of PE's used in the run or the number of files to be assembled together)

For example, if the RPN std '#' files are saved in a directory named other than output, such as output2, and the run used Ptopo_npex=1, Ptopo_npey=2, the command to obtain RPN standard files would be:

    d2z -rep output2 -nbf 2

The script 'd2z' uses the program 'bemol2000' to reassemble the pieces together.
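Since the number of files equals the number of PEs, the -nbf argument can be derived directly from the processor topology; a small sketch (the d2z command itself is only echoed here, not executed):

```shell
# Derive -nbf from the processor topology: one output file per PE.
Ptopo_npex=2; Ptopo_npey=2
nbf=$((Ptopo_npex * Ptopo_npey))
echo "d2z -rep output -nbf $nbf"   # illustration only; not executed
```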

For visualization or handling of these '#' or 'Z' grid files see in: RPN Utilities for RPN FSTD file manipulation/viewing


Running GEMDM on BATCH Mode

As mentioned in the section GEMDM Environment, one must ensure that the environment variable $gem exists and corresponds to the intended version. One must also make sure the batch setup has been done before attempting to submit any batch runs (see Batch Mode Setup).

In your working directory, create a sub-directory (ie:dbg2) and place 3 GEMDM configuration files (configexp.dot.cfg, gem_settings.nml and outcfg.out) into it. You can take a sample set from $gem/run_configs (ie: dbg1 is good)
    pollux 36% cd gem_320 (ie: your working directory)
    pollux 37% mkdir dbg2
    pollux 38% cp $gem/run_configs/dbg1/* dbg2
    pollux 39% cd dbg2
    pollux 40% ls
    configexp.dot.cfg gem_settings.nml outcfg.out
    pollux 41% vi configexp.dot.cfg
Refer above to GEMDM structure for the descriptions of the configuration files. The file configexp.dot.cfg applies only to batch runs; there are several essential parameters to modify:
    (1) You must provide a path to find the existing binaries:
      absaddres=/users/dor/armn/armnviv/myabsolutes/v320;
    If it is absent or undefined:
      'absaddres=;'
    then a copy of the binaries visible in your current working directory will be used
    (ie: maingemntr_IRIX64_3.2.0.Abs, maingemdm_IRIX64_3.2.0.Abs)
    (2) You must indicate which machine the batch run will execute:
      mach=pollux;
    (3) You must indicate the name of the execution directory that it would create:
      exp=mytest1;
    (4) Ensure that the absolutes are visible and compatible with the machine
(Please refer to the documentation in $gem/doc for the other parameters in this configuration file)
Now return to just above the directory dbg2 and submit the job to the machine by typing:
    pollux 42% Um_launch dbg2
If the batch run was defined in configexp.dot.cfg as follows:
    mach=azur;
    exp=hibou;
Then the job will run under $HOME/gem/azur/hibou/ and the listings will appear in $HOME/listings/ or $HOME/listings/azur/ in the following format:
    GEM_hibou_E_1232133.1 (listing for gemntr)
    GEM_hibou_M_1232133.1 (listing for gemdm)
There might be additional listings:
    GEM_hibou_1232133_PREPFT_6_1810570.1 (postprocessing)
    GEM_hibou_1232133_FT_6_734642.1 (output file transfer to another machine)
Note what appears in $HOME/gem/azur/hibou/ :
    c1f01p8m 7% cd $HOME/gem/azur/hibou/
    c1f01p8m 8% ls

    gem_settings.nml outcfg.out
    maingemdm_AIX_3.2.0.Abs* output/
    maingemntr_AIX_3.2.0.Abs* process/
    c1f01p8m 9%
This is the same as what you would see in your working directory when you run GEMDM interactively.


Setup to create GEMDM absolutes/binaries

    This setup will enable you to build binaries in the local working directory.
(1) As mentioned in the section GEMDM Environment, one must ensure that the environment variable $gem exists and is correct. Create your working directory for GEMDM at the chosen version if it is not already created: (recommended in your $HOME where there is backup)
    pollux 17% mkdir gem_320
    pollux 18% cd gem_320

(2) "Open" a new experiment using "ouv_exp" in this working directory
    pollux 19% ouv_exp (see: etagere utilities)
    Opening experiment 'base' press RETURN to confirm
    or give the name of the experiment to open
    Just hit return on the query and enter $gem/RCS for the 'RCSPATH'.
    For those who use the 4D-VAR, enter $gem/RCS $gem/RCS_4DVAR for the 'RCSPATH'
    Then hit Ctrl-X to end. (Note that this command creates a hidden file '.exper_cour' and an RCS directory)
(3) Now create directories or soft links needed to complete the setup to run GEMDM (for the specified version). For each working platform [ARCH], you will need to create the directory of compiled code (*.o): malib[ARCH] where [ARCH] = Linux, IRIX64, or AIX
    ie: malibIRIX64, malibLinux, malibAIX

    pollux 20% mkdir malibIRIX64
    pollux 21% mkdir malibLinux
    pollux 22% mkdir malibAIX
This completes the setup for the working directory where you can create GEMDM binaries.

To avoid limited disk space problems:
It is highly recommended to create soft links instead of actual directories for malibIRIX64, malibLinux and malibAIX. These directories hold the object files created when you recompile modified routines from the GEMDM library. It is also recommended to create soft links for the GEM binaries, as they too are written in the working directory. An example of a suggested setup of directories and binaries for the Azur (AIX) machine:
    rmdir malibAIX
    mkdir /fs/mrb/02/armn/armnviv/v3.2.0/malibAIX
    ln -s /fs/mrb/02/armn/armnviv/v3.2.0/malibAIX malibAIX
    ln -s /fs/mrb/02/armn/armnviv/v3.2.0/maingemntr_AIX_3.2.0.Abs maingemntr_AIX_3.2.0.Abs
    ln -s /fs/mrb/02/armn/armnviv/v3.2.0/maingemdm_AIX_3.2.0.Abs maingemdm_AIX_3.2.0.Abs
NOTE: Whenever you create the binaries/absolutes, you must always have the "$gem" defined (see: GEMDM Environment) and you must also be in the working directory that has been setup as directed above.

Creating GEM absolutes(binaries) with no code modifications

Refer to the Setup to create binaries before continuing.
In your working directory, you will need to first create the Makefile with:

    r.make_exp

Note that the Makefile should be recreated from time to time to ensure coherence with changes made in the working directory.

You can create both absolutes (executables) with the target "gem" of the Makefile that would be valid for the current platform with:
    make gem

( in POLLUX, you would obtain maingemntr_IRIX64_3.2.0.Abs and maingemdm_IRIX64_3.2.0.Abs )

or, you can create only the entry program with:

    make gemntr

( in AZUR, you would obtain maingemntr_AIX_3.2.0.Abs )

or, you can create just the main program with:

    make gemdm

( in Linux, you would obtain maingemdm_Linux_3.2.0.Abs )

or if a non-MPI binary is desired, create the binaries using these commands:

    make gem_nompi
(which is equivalent to:)
    make gemntr_nompi; make gemdm_nompi
or if a 4d-var gemdm (main) binary is desired, create it using:
    make gem4d

Creating GEM absolutes(binaries) WITH code modifications

Refer to the Setup to create binaries before continuing.

In your working directory, you will need to first create the Makefile with:

    r.make_exp

Note that the Makefile should be recreated from time to time to ensure coherence with changes made in the working directory.

To extract the routines for modification from the RCS, type the following command:
    (Example given for 'rhs.ftn')
    pollux 1% omd_exp rhs.ftn (see etagere utilities for 'omd_exp')
    extraction of version of module rhs.ftn
    from directory /usr/local/env/armnlib/modeles/GEMDM_shared/v_3.2.0/RCS
Then modify these routines with your preferred editor (vi,emacs,pico,etc)
    pollux 2% vi rhs.ftn
Make the object file:
    pollux 4% make rhs.o
(This will place the '*.o' into malibIRIX64 because we are compiling on POLLUX)
If you are modifying a comdeck:
    pollux 5% omd_exp rhsc.cdk (Example given for rhsc.cdk)
    pollux 6% vi rhsc.cdk
Then you must update the Makefile before creating the object files:
    pollux 7% r.make_exp
Cleanup and remove files from malibIRIX64:
    pollux 8% make clean; rm malibIRIX64/*.o
Then make the object files:
    pollux 9% make objloc
Then, after the compilations, rebuild the absolute as shown above in "Creating GEM absolutes(binaries) with no code modifications":
    make gem

For further details in compilations and building absolutes, see documentation on r.compile, r.build at RPN utilities for code compilation

Note that comparisons of your modifications can be made easily to the original source code (found in $gem/src/).


GEMDM Code Convention

The routines, functions and comdecks are named so as to group related operations together. Please see the naming and coding conventions for routines and variables specified in this document: code_stds_gemdm.html
Here is a table (not necessarily complete) that shows the prefix names or keywords that relate to the functionality of the routines/functions:

PREFIX/KEYWORD                FUNCTION RELATION
e_*                           entry program (e_gemntr)
p_*                           physics interface
c_*                           coupling interface (incomplete)
v4d_*                         4DVar general interface
*_ad                          4DVar: adjoint
*_tl                          4DVar: tangent linear
*_tr                          4DVar: trajectory control
adw_*                         semi-Lagrangian advection
bac_*                         back substitution
bloc*, *slab*, *sor*, writ*   output related
chem_*                        chemistry interface (incomplete)
hspng_*                       horizontal sponge
hzd_*                         horizontal diffusion
nest_*                        nesting mechanism for LAM
nli_*                         non-linear portion of reduced set of eqns
pre_*                         add metric corrections to R.H.S. of eqns
rhs_*                         right-hand side (R.H.S.)
set_*                         setup routines
sol_*                         solver
tr_*                          tracers
vspng_*                       vertical sponge
vte_*                         vertical interpolation








GEMDM MACRO CODE



An example of a processor topology of (2x1) would look like this:

ie: Ptopo_npex=2, Ptopo_npey=1




The diagram below gives a more detailed layout of the domain decomposition of a (23 x 12) global problem size on a (2x1) processor topology:



Note that arrays with halos are formally shaped as follows:
field(l_minx:l_maxx, l_miny:l_maxy,l_nk)
or to shorten the declaration of these arrays, a macro called LDIST_SHAPE is also very often used:
field(LDIST_SHAPE,l_nk)

The macro LDIST_SHAPE is expanded at the pre-processor stage, before compilation. There are other macros; they are defined through the "#include <model_macros_f.h>" statement. The most common ones used in the code are listed here:
MACRO                                      EXPANSION
LDIST_SIZ                                  (l_maxx - l_minx+1)*(l_maxy - l_miny+1)
LDIST_DIM                                  l_minx,l_maxx,l_miny,l_maxy
DIST_SHAPE                                 Minx:Maxx,Miny:Maxy
DIST_SIZ                                   (maxx - minx+1)*(maxy - miny+1)
DIST_DIM                                   minx,maxx,miny,maxy
DCL_DYNVAR(Hzd, xp0_8, real*8, (HZD_MAX))  real*8 Hzd_xp0_8( HZD_MAX)
                                           pointer( Hzd_xp0_8_ , HZD_MAX)
                                           common/ Hzd / Hzd_xp0_8_
MARK_COMMON_BEG(Abc)                       integer Abc_first(-1:0)
                                           common / Abc / Abc_first
MARK_COMMON_END(Abc)                       integer Abc_last
                                           common / Abc / Abc_last

Note: the macros can be found at $ARMNLIB/include/rpnmacros.h
Macros have advantages: they reduce syntax and typo errors, and they take less space. They also have a disadvantage: they are not part of the standard programming language, which can make the code difficult to understand at first glance. A full expansion of the code can be found in the "*.f" decks.


GEM in LAM CONFIGURATION

For those who want to try GEM in a Limited Area Modelling (LAM) configuration, here is an example of a LAM grid definition. Complete details of the namelist variables are described in $gem/doc/gem_settings.nml.txt


For more information on LAM, refer to: lam seminar

authors: V.Lee, M.Desgagné (March 2003)
revision: V.Lee (Sept 2005)