Publishing ACCESS model simulations to ESGF

There are two main phases in preparing an ACCESS simulation for publication on an ESGF node: post-processing and quality control (QC).
In the post-processing phase we extract or calculate, from the raw model output, selected variables as defined by the CMIP5 standard_output table.
To do so we use the post-processor app created by Peter Uhe (CAWCR), a Python wrapper around the Climate Model Output Rewriter (CMOR). CMOR2 is the current version and can be used to satisfy the specifications for CMIP5 model output.

APP1-0 description

APP1-0 is roughly made up of:
  • database/champions – contains all the champions tables, the files defining which variables will be calculated, their publishing status, the input variables, input filenames and, if applicable, how they will be calculated. There is a file for each CMIP table, plus a few more for “extra” variables. Fields are: cmip variable, definable in access, implemented in post processor, dimensions, access variable name, access file location, realm, calculation, override units, axes modifier, positive, notes. See the bottom of the page for an example.
  • database/experiments.csv – this is the default name for a csv file in which the following attributes/settings are set for each simulation: experiment_id, experiment_directory, start_year, end_year, realization, initialization, physics_version, local_experiment name, model name, forcing, parent_id, parent_rip, branch time, land model, notes
  • database/app.db – this is the default name of a sqlite database that combines the information provided by the experiments.csv file and the champions tables. You can use the same app.db and experiments.csv files for all your simulations; we normally produce one per experiment, appending _local-exp-name to distinguish them. There is a row for each file that needs to be processed. The “status” field is updated after each processing step: unprocessed, processing, processed, published; in case of errors it is set to processing_failed.
  • database/ is the python script that produces the app database. Which champions tables are used for the different CMIP5 experiments (historical, amip, etc.) is defined here. If a database with the same name already exists, it will be updated.
  • – a wrapper for the main post-processor script: it passes information about a specific simulation, reads its input from the app database (app.db by default) and sets the version (e.g. v20130607).
  • – the main post-processor script; it calls CMOR, passing it all the relevant settings. Some of the global attributes (such as title) are defined directly in this script. It can call another script if a calculation needs to be performed to output a variable.
  • – this script has two functions: it updates the version and moves the files to the “published” directory, currently /g/data1/p66/CMIP5/published/.
This will become /g/data1/ua6/authoritative/IPCC or ACCESS_pre-publishing; at the moment the NCI publisher rsyncs the p66 directory to this one before starting the actual publishing. The “published” directory follows the official CMIP5 DRS.
  •,, – these are the scripts used to submit an instance of the post-processor to the queue and to set up the environment to run it
  • /um2netcdf/ – this directory contains the script necessary to convert the UM binary output to netCDF
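The per-simulation attributes stored in experiments.csv can be read with a few lines of Python. This is only an illustrative sketch: the column names below follow the list given above, but their exact spelling and order in the real file are assumptions.

```python
import csv
import io

# Columns as listed in the experiments.csv description above; exact
# spelling/order in the real file is an assumption.
FIELDS = ["experiment_id", "experiment_directory", "start_year", "end_year",
          "realization", "initialization", "physics_version",
          "local_experiment_name", "model_name", "forcing", "parent_id",
          "parent_rip", "branch_time", "land_model", "notes"]

def read_experiments(text):
    """Parse experiments.csv content (one row per simulation) into dicts."""
    return [dict(zip(FIELDS, row)) for row in csv.reader(io.StringIO(text))]
```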

Set up of the APP1-0

  1. svn checkout
  2. Update the submission, wrapper and environment set-up scripts so you are loading the correct environment variables and NCI project id. See the bottom of the page for example scripts.
  3. Check that the settings in the champions tables are correct for your simulations. In particular: definable in access should be “yes” if you want to process the variable, implemented in post processor is a string representing the publishing priority, and access file location completes the input file path set in experiments.csv.
  4. In APP1-0/database there is an example of an experiments.csv file. Update this file with the information relating to your simulations. You can use one file for all simulations, adding a new row for each, or, as we did, use a separate one for each of them. In particular, this is where you set up the first part of the input file path.
  5. Set up the directory where you are storing the input files. Ex.
    • ../../<local-exp>/history/atm/netCDF/<UM output files>
    • ../../<local-exp>/history/ocn/<ocean output files>
    • ../../<local-exp>/history/ice/<sea-ice output files>
  6. Set up the directory where you are storing the output files and the ancillary files. Ex.
    • ../../<local-exp>/CMIP5/job_output/
    • ../../<local-exp>/CMIP5/ancillary_files/ {, grid_spec.auscom_….nc,,, Base-09Ipj.astart-….nc}
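The directory layout of steps 5 and 6 can be sketched as below; layout_paths and make_layout are hypothetical helpers, and the subdirectory names simply follow the examples above.

```python
import os

def layout_paths(local_exp_dir):
    """Return the input (history) and output (CMIP5) subdirectories
    described in steps 5 and 6 for one local experiment directory."""
    return [
        os.path.join(local_exp_dir, "history", "atm", "netCDF"),  # UM output files
        os.path.join(local_exp_dir, "history", "ocn"),            # ocean output files
        os.path.join(local_exp_dir, "history", "ice"),            # sea-ice output files
        os.path.join(local_exp_dir, "CMIP5", "job_output"),
        os.path.join(local_exp_dir, "CMIP5", "ancillary_files"),
    ]

def make_layout(local_exp_dir):
    """Create the directories (safe to re-run)."""
    for path in layout_paths(local_exp_dir):
        os.makedirs(path, exist_ok=True)
```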

Running the APP1-0
  1. source the environment set-up script – this loads all the necessary modules and sets up the environment variables
  2. run the database script with python – this creates the app database
  3. run the submission script – this submits to the queue a job which loads the environment and runs the python post-processor
  4. Check the run output and error log files. If there are errors, or the process was interrupted and you need to reprocess a variable, use the script in /database/ to reset the variable status to “unprocessed”. In case of error the status will be “processing_failed”; if the job is interrupted before it can update the status, it will be left as “processing”.
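What the reset script in /database/ does can be sketched with sqlite3 as below. The table and column names here are assumptions for illustration; only the status values come from the text above.

```python
import sqlite3

def reset_stuck(conn):
    """Set rows left in 'processing_failed' or 'processing' back to
    'unprocessed' so the next run picks them up again."""
    cur = conn.execute(
        "UPDATE files SET status='unprocessed' "
        "WHERE status IN ('processing_failed', 'processing')")
    conn.commit()
    return cur.rowcount

def demo_db():
    """In-memory stand-in for app.db, with one row per file to process."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE files (variable TEXT, status TEXT)")
    conn.executemany("INSERT INTO files VALUES (?, ?)",
                     [("tas", "processed"),
                      ("pr", "processing_failed"),
                      ("psl", "processing")])
    conn.commit()
    return conn
```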

Quality Control

The software to manage the QC component has been developed by DKRZ. A copy of this is installed on raijin in /g/data1/p66/QC. The software consists of two components:
  • a set of C++ executables and scripts (QC) to do the actual checks (min, mean, max, etc.)
  • a wrapper script (QCWrapper) to upload the results to a central PostgreSQL database at DKRZ.

Move to publishing directory
  1. Check the settings for “outpath” if you’re not using the path set in the database. In our case we commented out all the rows where the outpath was read from the app database, because the files had already been moved once to the standard ACCESS output directory, so we explicitly defined outpath='/projects/p66/pfu599/'
  2. Modify the select statement (around line 200 in the code) so it selects the fields you want to publish (use the priority field, e.g. publish1 for the highest priority, etc.; we have also added the publishing date to this field)
  3. First run the script with the update_version option selected (around line 210), if you need to.
  4. Then run it with the move_to_published option selected (around line 211). Do this the first time as a dry_run, to check everything is ok, and then with publish_var; you set these at lines 160-165 (tmp=dry_run and tmp=publish_var).
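For reference, a CMIP5 DRS path of the kind used under the “published” directory can be assembled as below. The component order follows the QC results path shown later on this page, and all values in the usage example are illustrative.

```python
import os

def drs_path(root, product, institute, model, experiment,
             frequency, realm, cmip_table, ensemble, version, variable):
    """Assemble a CMIP5 DRS directory path (the activity is fixed to CMIP5)."""
    return os.path.join(root, "CMIP5", product, institute, model, experiment,
                        frequency, realm, cmip_table, ensemble, version, variable)
```

For example, drs_path("/projects/p66/CMIP5/published", "output1", "CSIRO-BOM", "ACCESS1-3", "historical", "mon", "atmos", "Amon", "r1i1p1", "v20130607", "tas") reproduces the directory nesting used in the QC set-up below.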

Quality Control set up
  1. go to /projects/p66/QC/QCWrapper and create your own running directory
  2. In this subdirectory copy the config file qc_ACCESS_final.conf and change:
    • contact details and QCBD logfile at the start of the script (lines 18-24)
    • $QCBD_VERSION experiment version/s to process (around lines 30-50)
    • $DATA_ROOT_FS for the experiment, which has to end with the “experiment” subdir. Ex. DATA_ROOT_FS=/projects/p66/CMIP5/published/CMIP5/output1/CSIRO-BOM/ACCESS1-3/historical/ (around lines 50-60)
  3. Copy the run script into your own directory and change the reference to the configuration file (NB: --nodb is the “no database” flag, so it doesn't submit files to the external database; --noqc is the “no quality control” flag). An example is shown at the bottom of the page.
  4. Submit the job to the queue by running qsub

After QC is finished
  1. Check log file /projects/p66/QC/QCWrapper/<your-subdir>/qcdb.log
  2. Check results under /projects/p66/QC/QCResults.ACCESS/CMIP5/output1/CSIRO-BOM/<<model>>/<<experiment>>/data/<<freq>>/<<realm>>/<<cmip-table>>/<<ensemble>>/<<version>>/<<var>>/
    • qc_<var>_<cmip-table>_<model>_<experiment>_<ensemble>.nc contains stats for the file
    • qc_warning_<var>_<cmip-table>_<model>_<experiment>_<ensemble>_<period>.txt contains warning messages. You can safely ignore all the warnings about the ACCESS1-3/ACCESS1-0 name.
    • tid_<var>_<cmip-table>_<model>_<experiment>_<ensemble>.txt contains the file tracking id and the error code (if 0 no errors)
  3. You can plot the file stats by copying the plotting script from /g/data1/p66/QC/QCWrapper/pxp581/QC_scripts/ into your area; it will produce .png files and store them in …/QC_scripts/qc_plots. On raijin you can use display <filename>.png to view them. You can choose the database_plot option, where you set the version of the experiment you want to process, or the manual option, which allows you to add more constraints to your selection.
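The error codes in the tid_*.txt files can be checked in bulk with a sketch like the one below. It assumes each line holds a tracking id followed by the error code, which may not match the real file layout exactly.

```python
def failed_tracking_ids(lines):
    """Return tracking ids whose trailing error code is non-zero
    (0 means no errors, as described above)."""
    bad = []
    for line in lines:
        parts = line.split()
        if len(parts) >= 2 and parts[-1] != "0":
            bad.append(parts[0])
    return bad
```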

Example of the APP1-0 queue submission script

#!/bin/bash -l
date=`date +%F_%H-%M-%S`
cd $HOME/APP1-0
#set up environment
. ./
Exp=testHG3-HT # local experiment name
N=200 # maximum number of files that will be processed
Table=Limon # CMIP table to be processed
#Output files (the log directory and file names below are examples)
dir=$HOME/APP1-0/job_output
name=app_${Exp}_${Table}
output=$dir/${name}_${date}.out
error=$dir/${name}_${date}.err
mkdir -p $dir
#qsub command
qsub -P w97 -q normal -l walltime=8:00:00,mem=16GB -lother=gdata1 -N $name -o $output -e $error -v Table=$Table,Exp=$Exp,N=$N ./
#command to use without environment variables
#qsub -P p66 -q normal -l walltime=8:00:00,mem=3500MB -wd -v ./

Example of the APP1-0 wrapper script submitted to the queue

#!/bin/bash -l
cd $HOME/APP1-0
#set up environment
. ./
#Call python script

Example of the APP1-0 environment set-up script

#!/bin/bash -l
#Set up environment for post-processor
# 1. Global environment
module use ~access/modules
module load netcdf/4.3.0
module load python/2.7.3
module load python/2.7.3-matplotlib
module load pythonlib/cdat-lite/6.0rc2-fixed
module load pythonlib/ScientificPython/2.8
module load pythonlib/cmor
module load udunits
export UDUNITS2_XML_PATH=/apps/udunits/2.1.24/share/udunits/udunits2.xml
export PYTHONPATH=$PYTHONPATH:~/pythonlibs/lib/python2.7/site-packages/
# 2. APP environment variables
export APP_OUTPATH=/g/data1/ua8/tmp/testHG3-HT/ #set this to where you want output data and logs to go
export APP_CHAMPIONS_DIR=$HOME/APP1-0/database/champions/
export APP_EXPERIMENTS_TABLE=experiments.csv
export APP_DATABASE=$HOME/APP1-0/database/app.db
export CDAT_LOCATION=~access/apps/pythonlib/cdat-lite/i6.0rc2-fixed/lib/python2.7/site-packages/cdat_lite-6.0rc2-py2.7-linux-x86_64.egg

Example of a champions table: day_limited.csv

#Champions table: day_limited,,,,,,,,,,,
#cmip variable,definable in access,implemented in post processor,dimensions,access variable name,access file location,realm,calculation,override units,dummy file,positive,notes
tslsi,yes,failed,2Datmos,ts ts_sea
prc,yes,pub2013-11,2Datmos,prc1 prc2,/atm/netCDF/*,atmos,var[0]+var[1],,,,
mrro,yes,need_check,2Datmos,mrros smrros,/atm/netCDF/*,land,var[0]+var[1],,,,"STASHmapping: m01s08i234 + m01s08i235, bad units: kg m-2 a-2"
rlus,yes,pub2013-11,2Datmos,rls rlds,/atm/netCDF/*,atmos,(var[0]-var[1])*-1,,,,(m01s02i201-m01s02i207)*-1
rsus,yes,pub2013-11,2Datmos,rss rsds,/atm/netCDF/*,atmos,(var[0]-var[1])*-1,,,,(m01s01i201-m01s01i235)*-1
usi,yes,pub2July,2Docean,uvel_d,/ice/iceh_day.????-??.nc,seaIce,,,,,Implement new namelist following ~pju565/ACCESS/input/cice/cice4.1_in.nml_0layer_cmip5
vsi,yes,pub2July,2Docean,vvel_d,/ice/iceh_day.????-??.nc,seaIce,,,,,Implement new namelist following ~pju565/ACCESS/input/cice/cice4.1_in.nml_0layer_cmip5
sic,yes,pub2July,2Docean,aice_d,/ice/iceh_day.????-??.nc,seaIce,,,,,Implement new namelist following ~pju565/ACCESS/input/cice/cice4.1_in.nml_0layer_cmip5
sit,yes,pub2July,2Docean,hi_d,/ice/iceh_day.????-??.nc,seaIce,,,,,Implement new namelist following ~pju565/ACCESS/input/cice/cice4.1_in.nml_0layer_cmip5
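The “calculation” column above combines the input variables, which the expression refers to as var[0], var[1], and so on. A minimal sketch of evaluating such an entry is shown below (using eval purely for illustration; the real post-processor's implementation may differ):

```python
def apply_calculation(calculation, var):
    """Evaluate a champions-table calculation string, e.g.
    'var[0]+var[1]' or '(var[0]-var[1])*-1', on the list of inputs."""
    if not calculation:              # empty field: single input passed through
        return var[0]
    return eval(calculation, {"__builtins__": {}}, {"var": var})
```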

Example of a PBS job script to run the QC check

#PBS -l walltime=4:00:00
#PBS -l mem=1900mb
#PBS -j oe
#PBS -l wd
#PBS -o logs
module load python/2.7.3
module load python/2.7.3-matplotlib
module use ~access/modules
export PYTHONPATH=$PYTHONPATH:~pfu599/pythonlibs
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/g/data1/p66/QC/Oracle/instantclient_11_1
python ./src/ --configure=qc_ACCESS.conf --nodb