A little introduction on how to start running GEMCLIM (GEM in
climate mode)
To run GEM one needs appropriate executables as well as a set of
scripts and
configuration files.
Let's start by assuming that
- the model version is v_3.2.2,
- most file transfers are initiated from a machine called pollux,
- the multi-node machine you want to run on is called maia,
- the directory /fs/mrb/03/armn/armnkwi/ exists on maia and
- maia runs under a version of the IBM AIX OS.
1) Create the directories 'gem', 'gemclim' and 'listings'
You will have to create a few directories before you can start
running the model.
Please create the directories where the variables
'xfer' and 'outrep' from your 'configexp.dot.cfg' point to.
You also need to create the directories 'gem' and 'listings' directly
in your ${HOME}. In each of these two directories you need to create
links with machine names which point to subdirectories in a place where
you have enough room. In 'gem' you only need to create a link for
the machine on which you run the model. In 'listings' you need to
create links for the machines on which you run the entry, the
model and the post processing, as well as one for pollux (pollux will do
the file transfers).
For example:
cd $HOME
mkdir gem
mkdir listings
cd ${HOME}/gem
ln -s /fs/mrb/03/armn/armnkwi/maia maia
cd ${HOME}/listings
ln -s /fs/mrb/03/armn/armnkwi/listings maia
ln -s /data/dormrb03/listings/armnkwi pollux
${HOME}/gem/maia: this is the execution directory in which the model is run.
${HOME}/listings/maia: this is where the model and part of the script listings go.
${HOME}/listings/pollux: pollux handles the transfer of the model output from maia to the machine on which you process or store your data. The listings of these transfers go here.
At the end of each job all listings get zipped and transferred to 'arch_mach' into your 'archdir', as specified in your 'configexp.dot.cfg'.
Since you will be running in climate mode you will have to create
another link:
cd $HOME
ln -s gem gemclim
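The directories that the variables 'xfer' and 'outrep' point to (mentioned at the beginning of this section) can be created in the same way. A minimal sketch, assuming both point under /fs/mrb/03/armn/armnkwi/ on maia (the subdirectory names are only placeholders; the actual paths come from your own 'configexp.dot.cfg'):
mkdir -p /fs/mrb/03/armn/armnkwi/xfer
mkdir -p /fs/mrb/03/armn/armnkwi/output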
2) Export RDIAG and EDITFST in your '.profile_usr'
Please export these two variables in your '.profile_usr':
export RDIAG=/usr/local/env/armnlib/scripts/latest.r.diag
export EDITFST=editfst_6.02
3) The executables
Usually GEM needs two absolutes (executables): one for the entry ('...ntr...') and one
for the model ('...dm...').
Their names are:
'maingemclimntr_architecture_modelversion.Abs' i.e.
'maingemclimntr_AIX_3.2.2.Abs' for the entry,
'maingemclimdm_architecture_modelversion.Abs' i.e.
'maingemclimdm_AIX_3.2.2.Abs' for the model.
To find out how to create the absolutes you can either have a look at the general introduction
to GEMDM web page or follow the short example a few lines below.
In climate mode the absolutes always need to be on the machine from
which the model gets started.
If the model is run in LAM mode the entry is usually run on a machine with a
different architecture than the model. In this case there needs to be an absolute
for the entry built for the architecture of the entry machine, plus absolutes for the
entry and the model built for the architecture of the machine where the model will be
running. All three absolutes have to be on the machine
from which the model will be launched.
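For example (a hypothetical setup, assuming the entry runs on a Linux front-end machine and the model on the AIX machine maia; the 'Linux' architecture tag is only an illustration), the three absolutes for version 3.2.2 would follow the naming scheme above:
maingemclimntr_Linux_3.2.2.Abs   (entry, architecture of the entry machine)
maingemclimntr_AIX_3.2.2.Abs     (entry, architecture of the model machine)
maingemclimdm_AIX_3.2.2.Abs      (model, architecture of the model machine)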
Example of how to create your own executables for climate version 3.2.2 and AIX:
On the machine for which you want to create the absolutes, go into the
directory in which you keep your own decks (preferably
under your ${HOME}). We assume that ~armnkwi exists and already has a
working setup in your environment.
Create some directories and files:
mkdir RCS
cp ~armnkwi/gem/v_3.2.2_clim/.exper_cour .
If you want to run on AIX you now have to create the absolutes on a
machine running under this OS. We have already assumed that maia is the name of such a machine.
First create some links:
mkdir -p ~armnkwi/maia/ABS/v_3.2.2/malibAIX
ln -s ~armnkwi/maia/ABS/v_3.2.2/malibAIX malibAIX
ln -s ~armnkwi/maia/ABS/v_3.2.2/maingemclimdm_AIX_3.2.2.Abs maingemclimdm_AIX_3.2.2.Abs
ln -s ~armnkwi/maia/ABS/v_3.2.2/maingemclimntr_AIX_3.2.2.Abs maingemclimntr_AIX_3.2.2.Abs
Set the model environment
. r.sm.dot gemclim 3.2.2
Create the make file and the 'arbre_de_dependance'
r.make_exp
Compile the model
( 'make gemclim' will create the entry AND the model,
  'make gemclimntr' will create only the entry,
  'make gemclimdm' will create only the model )
make gemclim
If you want to run the model on 'maia' or 'naos' you must copy the two
absolutes there:
rcp maingemclim*_AIX_3.2.2.Abs maia:maia/ABS/v_3.2.2
Modify some decks:
You will find all the RCS model decks in ${gemclim}/RCS_DYN and
${gemclim}/RCS_PHY
To get source decks from the code repository simply type:
omd_exp function.ftn
or
omd_exp comdeck.cdk
If you create/include a new comdeck or create/call a new function, you
need to recreate the 'arbre_de_dependance' with
r.make_exp
If you want to compile only one or a few decks you would type:
make function.o
If you want to compile all decks in your directory or if you changed a
comdeck you simply use:
make objloc
After having compiled a function or comdeck you will have all their
decks and '*.f' files in your directory. They will all have the permission
'-r--r--r--'. To erase them again you can use:
make clean
This will erase all decks with the permission '-r--r--r--', so be
careful!!! Even if you write-protect a deck so as not to accidentally erase
it, 'make clean' WILL ERASE it!
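If you want to check which files would be affected before running 'make clean', one possibility (a simple sketch, not part of the official scripts) is to list the decks and their permissions first:
ls -l *.f *.ftn *.cdk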
4) The scripts
Usually you can use the scripts from the environment. This is always
a closed version.
You might also want to ask either Bernard Dugas
(Bernard.Dugas@ec.gc.ca) or Katja Winger (Katja.Winger@ec.gc.ca) if
there is a newer, updated version of the scripts.
There are two types of scripts:
- version-dependent, which change with model versions:
${gemclim}/scripts/* and
- version-invariant, which do not change:
${ARMNLIB}/scripts/Climat_*
In case you want to modify and use some of these scripts you must
put your (modified) copies
of any version-dependent scripts into the ~/modeles/GEMCLIM/modelversion/scripts directory and
any version-invariant scripts into the ~/ovbin directory.
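A minimal sketch of how this could look for version 3.2.2 (after '. r.sm.dot gemclim 3.2.2' has set ${gemclim}, see below; the script names here are only placeholders, not actual script names):
mkdir -p ~/modeles/GEMCLIM/v_3.2.2/scripts ~/ovbin
cp ${gemclim}/scripts/my_dependent_script   ~/modeles/GEMCLIM/v_3.2.2/scripts/.
cp ${ARMNLIB}/scripts/Climat_my_script      ~/ovbin/.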
Every time you start the model or any of the scripts you have
to set the environment to the model version with, for example:
. r.sm.dot gemclim 3.2.2
Note that this adds two directories (or three or four, if you have 'scripts' or 'bin'
directories under ~/modeles/GEMCLIM/v_3.2.2/) to your environment variable
${PATH} and also sets up a new environment variable ${gemclim} which points
to the directory under which you find the official model environment
(source decks, scripts, etc.).
5) The configuration files
GEMCLIM needs three configuration files: 'configexp.dot.cfg',
'gemclim_settings.nml' and 'outcfg.out'.
configexp.dot.cfg: file to control where to send the batch run, how
much CPU time is allowed, on which machine to run the job, which model
version to use, on which machine to do the post processing, where to place
the stdout (listings) and the model output, etc.
gemclim_settings.nml: this file contains all the namelists to control the
model grid and the different schemes and parameters for the entry program
(gemclimntr) and the main program (gemclimdm).
outcfg.out: file to control the RPN standard file output from the run,
such as frequency of output, which fields, at what levels, etc.
If you have set 'strip_phy=1;' in your 'configexp.dot.cfg' this file
will get created automatically, otherwise you have to provide it for
the first job. In any case the first and last time step for the output
('steps=#,step,<start,end,inc>;') will get updated
automatically for the following jobs.
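As an illustration only (a hedged sketch: the variable names below are those mentioned in this document, the values and paths are placeholders, and your model version may use additional or differently named variables), a fragment of 'configexp.dot.cfg' could look like:
mach=maia;                  # machine on which the model runs
xfer=pollux;                # where the post processing is done (see section 7)
arch_mach=pollux;           # machine to which archives get transferred
archdir=/data/dormrb03/armnkwi/archives;   # placeholder archive directory
climat_cleanup=5;           # post processing every 5 days
strip_phy=1;                # re-organize the physics output (see section 7)
diagnos=1;                  # produce the monthly diagnostics (see section 7)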
A description of the general variables of these three configuration
files
can be found on the general introduction
to GEMDM web page (click on the version
number in the upper left corner). A description of the climate
variables can be found on the main GEM climate web page.
6) How to start the model
- If you run the model in LAM mode you need to start it from the machine on which you want
to do the entry; otherwise, go on the machine on which you want to run the model.
- Go into the directory where you have your config files 'configexp.dot.cfg' and 'gemclim_settings.nml'.
- Set the model environment with:
. r.sm.dot gemclim 3.2.2
- Launch the model by simply typing:
Um_launch .
7) What happens while the model is running
- Messages will appear on your screen telling you that the config files
got updated, that the executables were copied into your execution directory
on maia (${HOME}/gem/maia/${exp}) and finally that the entry was submitted.
- At that point you should see the entry running (or being queued). The
entry runs on only 1 CPU and takes only a few minutes if the model is
not run in LAM mode, but it can take up to 15 min per simulated month for a 100x150 grid in LAM mode.
- After the entry, the model job will be started on maia,
usually running on a multiple of 8 CPUs.
- The model output will be written to the directory '${HOME}/gem/maia/${exp}/output/current_last_step'.
Each process/tile will write to its own sub-directory, 00-00, 00-01,
... The domain geometry (actual tile number and configuration)
is specified in the namelist 'ptopo' of 'gemclim_settings.nml' (see the sketch after the list of output types below).
There are three types of output files:
dp... : dynamics on pressure levels
dm... : dynamics on model levels
pm... : physics on model levels
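As a hedged illustration of the 'ptopo' namelist mentioned above (the variable names Ptopo_npex and Ptopo_npey follow common GEMDM usage but may differ in your model version; check your own 'gemclim_settings.nml'), a decomposition into 2 x 4 = 8 tiles could look like:
 &ptopo
   Ptopo_npex = 2 ,   ! number of tiles along x
   Ptopo_npey = 4     ! number of tiles along y
 /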
- Every 'cleanup' days (this number is specified by the variable 'climat_cleanup'
in your 'configexp.dot.cfg') the post processing gets started.
This means that the model output will get moved from the
output directory '.../output/current_last_step' into
'.../output/last_step_xxx' (actually the directory just gets renamed).
- When the post processing is done on another machine (${mach} not equal to ${xfer}) a job
(..._FT_...) will be sent to pollux which moves the files from ${mach}
to ${xfer}.
- Afterwards the post processing job (..._PP_...) gets launched on ${xfer}. This
is a small 1-CPU job which determines what kind of post processing is to be done, e.g.
saving of analysis or pilot files, the usual post processing, or
diagnostics. If neither a file transfer nor a saving of analysis or pilot files
needs to be done, this job will be run as part of the model job.
- If we are still running on maia at this point, the reassemble job will start,
using 8 CPUs. It will first put all tiles together (using d2z) and put the output into a
sub-directory of your post processing working directory (specified by the variable 'xfer' in your
'configexp.dot.cfg') called '${exp}/WORKING_ON_last_step_xxx_files'. If
you have set 'strip_phy=1;' in your 'configexp.dot.cfg', it will re-organize
the output: the 3-D physics fields will be interpolated from
model to pressure levels, a few new variables will be calculated, and
in the end there will be only two types of files:
md... : model levels (only 2-D fields)
pr... : pressure levels
all of which will be put one directory up into ${xfer}/${exp}.
- If you have not set 'strip_phy=1;'
your three types of output files will also get moved into the directory
${xfer}/${exp}.
- If you have set 'diagnos=1;' in your 'configexp.dot.cfg', the diagnostics
will start in ${xfer}, again on 8 CPUs. But this job is only launched after the model has run for the
requested (interval) number of months, as specified in your 'configexp.dot.cfg', and after the
last post processing job has finished. The diagnostic
job creates the monthly means, time series,
variances and covariances for each individual month in this interval.
- After this step, the diagnostic files as
well as the md- and pr-files will be archived and transferred to 'arch_mach' into the
directory 'archdir', again specified in your 'configexp.dot.cfg'.
- At the end of the diagnostics, all listings and jobs/scripts
(created during the run) from this job are collected and archived
in the same
directory.
- At the end of each model job, the restart files
will get archived and transferred to 'arch_mach' into 'archdir'.
The name of the archive will be '${exp}step${laststep}.ca.gz'. If the
run continues,
the "old" restart files will get erased from ${mach}. If the run is
finished, they will stay on ${mach}.
Note that the model does not depend on the
post processing and diagnostics. So even if one of these jobs
aborts, the model will continue running and, at the end of a job (in
case the run continues), submit the
next job automatically.
You will then get another directory in your ${HOME}/gem/${mach} for the
new
job.
Author: Katja Winger
Last update: December 2006