Run ACCESS-rAM3
About
ACCESS-rAM3 is an ACCESS-NRI-supported configuration of the UK Met Office (UKMO) Regional Nesting Suite for high-resolution regional atmosphere modelling.
A description of the model and its components is available in the ACCESS-rAM3 overview.
ACCESS-rAM3 comprises multiple suites: the Regional Ancillary Suite (RAS) and Ostia Ancillary Suite (OAS) that generate ancillary files (i.e., input files), and the Regional Nesting Suite (RNS) which runs the regional forecast.
The instructions below outline how to run ACCESS-rAM3 using ACCESS-NRI's supported configuration, specifically designed to run on the National Computational Infrastructure (NCI) supercomputer Gadi.
The example experiment within this page focuses on a flood event in Lismore, NSW, using BARRA-R2 land-surface initial conditions. Its configuration is specified in Nesting configuration.
If you are unsure whether ACCESS-rAM3 is the right choice for your experiment, take a look at the overview of ACCESS Models.
All ACCESS-rAM3 configurations are available on MOSRS via links at the top of this page.
ACCESS-rAM3 release notes are available on the ACCESS-Hive Forum and are updated when new releases are made available.
Prerequisites
- NCI Account
Before running ACCESS-rAM3, you need to Set Up your NCI Account.
- MOSRS account
The Met Office Science Repository Service (MOSRS) is a server run by the UK Met Office (UKMO) to support collaborative development with other partner organisations. MOSRS contains the source code and configurations for some model components in ACCESS-rAM3 (e.g., the UM).
To apply for a MOSRS account, please contact your local institutional sponsor.
- Join NCI projects
Join the following projects by requesting membership on their respective NCI project pages:
Tip
To request membership for the ki32_mosrs subproject, you need to:
- already be a member of the ki32 project
- have a MOSRS account
For more information on joining specific NCI projects, refer to How to connect to a project.
- Connection to an ARE VDI Desktop (optional)
To run ACCESS-rAM3, start an Australian Research Environment (ARE) VDI Desktop session.
If you are not familiar with ARE, check out the Getting Started on ARE section.
Warning
The waiting time to complete some of the above prerequisites may be 2-3 weeks.
Quick Start guide
The following Quick Start guide is aimed at experienced users wanting to run ACCESS-rAM3. For more detailed instructions, please refer to the Detailed guide.
Required setup for running ACCESS-rAM3
-
Start a new persistent session
From either a Gadi login node or an ARE terminal instance, run:
persistent-sessions start <name>
This will use your default project.
For further instructions on starting a persistent session, refer to the Detailed guide.
-
Assign the persistent session to Rose/Cylc workflows
Run the following command:
echo "<name>.${USER}.<project>.ps.gadi.nci.org.au" > ~/.persistent-sessions/cylc-session
substituting <name> with the name given to your persistent session, and <project> with the project assigned to it.
Tip
This step should only be done once.
For further instructions on assigning the target persistent session, refer to the Detailed guide.
-
Rose/Cylc setup
To get the required Rose/Cylc setup, run:
module use /g/data/hr22/modulefiles
module load cylc7
For further instructions on the Rose/Cylc setup, refer to the Detailed guide.
-
MOSRS authentication
Authenticate using your MOSRS credentials:
mosrs-auth
For further instructions on MOSRS authentication, refer to the Detailed guide.
Regional Ancillary Suite (RAS)
-
Copy the RAS from UKMO
rosie checkout u-bu503/nci_access_ram3
For further instructions on getting the RAS configuration, refer to the Detailed guide.
-
Run the RAS
cd ~/roses/u-bu503
rose suite-run
For further instructions on running the RAS configuration, refer to the Detailed guide.
Ostia Ancillary Suite (OAS)
-
Copy the OAS from UKMO
rosie checkout u-dk517
For further instructions on getting the OAS configuration, refer to the Detailed guide.
-
Run the OAS
cd ~/roses/u-dk517
rose suite-run
For further instructions on running the OAS configuration, refer to the Detailed guide.
Regional Nesting Suite (RNS)
-
Copy the RNS from UKMO
rosie checkout u-by395/nci_access_ram3
For further instructions on getting the RNS configuration, refer to the Detailed guide.
-
Run the RNS
From within the RNS suite directory, run:
rose suite-run
For further instructions on running the RNS configuration, refer to the Detailed guide.
Detailed guide
Set up an ARE VDI Desktop (optional)
Info
If you want to skip this step and run ACCESS-rAM3 from a Gadi login node instead, refer directly to the instructions on how to Set up persistent session.
Launch ARE VDI Session
Go to the ARE VDI page and launch a session with the following entries:
- Walltime (hours) → 5
This is the amount of time the ARE VDI session will stay active for.
ACCESS-rAM3 does not run directly on ARE. This means that the ARE VDI session only needs to carry out the setup steps and start the run itself.
- Queue → normalbw
- Compute Size → tiny (1 CPU)
As mentioned above, the ARE VDI session is only needed for setup and startup tasks, which can be easily accomplished with 1 CPU.
- Project → a project of which you are a member.
The project must have allocated Service Units (SU) to run your simulation. Usually, but not always, this corresponds to your $PROJECT.
For more information, refer to Join relevant NCI projects.
- Storage → gdata/access+gdata/hr22+gdata/ki32+gdata/rt52+gdata/ob53+gdata/cm45+gdata/vk83 (minimum)
This is a list of all the project data storage, joined by plus (+) signs, needed for the ACCESS-rAM3 simulation. In ARE, storage locations need to be explicitly defined to access data from within a VDI instance.
Every ACCESS-rAM3 simulation can be unique and input data can originate from various sources. Hence, if your simulation requires data stored in project folders other than the ones listed in the minimum storage above, you need to add those projects to the storage path.
For example, if your ACCESS-rAM3 simulation requires data stored in /g/data/<your-project-id> and /scratch/<your-project-id>, the following should be added to the minimum storage above: +gdata/<your-project-id>+scratch/<your-project-id> (a complete example is shown after this list).
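As a complete illustration of the Storage entry, assuming (hypothetically) that your extra data lives under a project called ab12, the full Storage string would read:
gdata/access+gdata/hr22+gdata/ki32+gdata/rt52+gdata/ob53+gdata/cm45+gdata/vk83+gdata/ab12+scratch/ab12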
Launch the ARE session and, once it starts, click on Launch VDI Desktop.

Open the terminal in the VDI Desktop
Once the new tab opens, you will see a Desktop with a few folders on the left.
To open the terminal, click on the black terminal icon at the top of the window. You should now be connected to a Gadi computing node.

Set up persistent session
To support the use of long-running processes, such as ACCESS model runs, NCI provides a service on Gadi called persistent sessions.
To run ACCESS-rAM3, you need to start a persistent session and set it as the target session for the model run.
Set up SSH-keys (once-only)
Follow the initialization step to set up your SSH keys correctly, so that you can run the model from outside the persistent session.
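As a rough sketch only (the exact procedure is described in the initialization instructions linked above, which take precedence), a typical once-only key setup on Gadi using standard OpenSSH tooling looks like:
ssh-keygen -t ed25519
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys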
Start a new persistent session
To start a new persistent session, using either a Gadi login node or an ARE terminal instance, run the following command:
persistent-sessions start <name>
This will start a persistent session with the given name that runs under your default project.
If you want to assign a different project to the persistent session, use the option -p:
persistent-sessions start -p <project> <name>
Tip
While the project assigned to a persistent session does not have to be the same as the project used to run the ACCESS-rAM3 configuration, it does need to have allocated Service Units (SU).
For more information, check how to Join relevant NCI projects.
To list all active persistent sessions run:
persistent-sessions list
The label of a newly-created persistent session has the following format:
<name>.<$USER>.<project>.ps.gadi.nci.org.au.
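For instance, using the same hypothetical names as the example further below (a session named cylc under project xy00), the commands would be:
persistent-sessions start -p xy00 cylc
persistent-sessions list
and the new session would appear in the list with a label of the form cylc.<your-username>.xy00.ps.gadi.nci.org.au.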
Specify target persistent session
After starting the persistent session, it is essential to assign it to the ACCESS-rAM3 run.
The easiest way is to create a file ~/.persistent-sessions/cylc-session that contains the label of the target persistent session.
You can do it manually, or by running the following command (by substituting <name> with the name given to the persistent session, and <project> with the project assigned to it):
echo "<name>.${USER}.<project>.ps.gadi.nci.org.au" > ~/.persistent-sessions/cylc-session
For example, if the user abc123 started a persistent session named cylc under the project xy00, the command will be:
echo "cylc.abc123.xy00.ps.gadi.nci.org.au" > ~/.persistent-sessions/cylc-session
For more information on how to specify the target session, refer to Specify Target Session with Cylc7 Suites.
Tip
You can simultaneously submit multiple ACCESS-rAM3 runs using the same persistent session without needing to start a new one. Hence, the process of specifying the target persistent session for ACCESS-rAM3 only needs to be done once. Then, to run ACCESS-rAM3, you just need to ensure that there is an active persistent session whose name matches the target you specified above. If there is no active persistent session, simply start one.
Terminate a persistent session
Tip
Logging out of a Gadi login node or an ARE terminal instance will not affect your persistent session.
To stop a persistent session, run:
persistent-sessions kill <persistent-session-uuid>
Warning
When you terminate a persistent session, any model running on that session will stop. Therefore, you should check whether you have any active model runs before terminating a persistent session.
Rose/Cylc/MOSRS setup
To run ACCESS-rAM3, you need access to several software tools as well as MOSRS authentication.
Cylc setup
Cylc (pronounced ‘silk’) is a workflow manager that automatically executes tasks according to the model's main cycle script suite.rc. Cylc controls how the job will be run and manages the time steps of each model component. It also monitors all tasks, reporting any errors that may occur.
To get the Cylc setup required to run ACCESS-rAM3, execute the following commands:
module use /g/data/hr22/modulefiles
module load cylc7
Warning
Cylc version cylc7/24.03 or later is required.
Also, before loading the Cylc module, make sure you have started a persistent session and assigned it to the ACCESS-rAM3 workflow. For more information about these steps, refer to Set up persistent session.
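To double-check which Cylc version the loaded module provides (a quick sanity check, assuming the module has been loaded as above), you can run:
cylc --version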
Rose setup
Rose is a toolkit that can be used to view, edit, or run an ACCESS modelling suite.
Completing the Cylc setup also makes Rose automatically available, so no additional step is required.
MOSRS authentication
To authenticate using your MOSRS credentials, run:
mosrs-auth
Warning
This step needs to be repeated for each new session (e.g., a new Gadi login or ARE terminal window).
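Putting the per-session steps together, once your persistent session has been set up, each new terminal session typically starts with:
module use /g/data/hr22/modulefiles
module load cylc7
mosrs-auth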
ACCESS-rAM3 configuration
ACCESS-rAM3 comprises three suites: the Regional Ancillary Suite (RAS), the OSTIA Ancillary Suite (OAS) and the Regional Nesting Suite (RNS).
Each suite has a suite-ID in the format u-<suite-name>, where <suite-name> is a unique identifier.
Typically, an existing suite is copied and then edited as needed for a particular experiment.
For more information on ACCESS-rAM3, refer to the ACCESS-rAM3 configuration page.
Info
Many of the following steps appear in both the RAS and RNS. For this reason, these steps are detailed only within the RAS section below and subsequently linked to within the RNS section.
Regional Ancillary Suite (RAS)
For the domain of interest, the RAS generates a set of ancillary files, such as initial conditions. These ancillary files are then used by the RNS.
The suite-ID of the RAS is u-bu503.
The latest release branch is nci_access_ram3.
Get the RAS configuration
Rosie is an SVN repository wrapper with a set of options specific for ACCESS modelling suites. It is automatically available within the Rose setup.
The RAS configuration can be copied from the MOSRS repository in 2 ways:
Suites are, by default, created in the user's Gadi home directory under ~/roses/<suite-ID>.
This path will be referred to as the suite directory.
The suite directory contains multiple subdirectories and files, including:
- app → directory containing the configuration files for various tasks within the suite.
- meta → directory containing the GUI metadata.
- rose-suite.conf → main suite configuration file.
- rose-suite.info → suite information file.
- suite.rc → Cylc control script file (Jinja2 language).
Local-only copy
To create a local copy of the RAS from MOSRS repository, run:
rosie checkout u-bu503/nci_access_ram3
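After the checkout completes, the suite directory (by default under ~/roses, as noted above) can be inspected, for example:
cd ~/roses/u-bu503
ls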
Remote and local copy
To create a new copy of the RAS both locally and remotely in the MOSRS repository, run:
rosie copy u-bu503
A new <suite-ID> folder is generated within the MOSRS repository and populated with descriptive information about the suite and its initial configuration.
For additional rosie options, run:
rosie help
Run the RAS
ACCESS-rAM3 suites run on Gadi through a PBS job submission.
When a suite runs, its configuration files are copied to /scratch/$PROJECT/$USER/cylc-run/<suite-ID>. A symbolic link to this directory is also created in the $USER's home directory under ~/cylc-run/<suite-ID>.
ACCESS-rAM3 suites comprise several tasks, such as checking out code repositories, compiling and building the different model components, running the model, etc. The workflow of these tasks is controlled by Cylc.
To run the RAS, execute the following command from within your RAS suite directory:
rose suite-run
After the initial tasks are executed, the Cylc GUI will open, where it is possible to view and control the different tasks in the suite as they are run.
Tip
The Cylc GUI can be safely closed without impacting the experiment run.
To open it again, run the following command from within the suite directory:
rose suite-gcontrol
All steps are completed!!
You will be able to check the suite output files after the run successfully completes.
If you get errors or you can't find the outputs, check the suite logs for debugging.
Check suite logs
It is not unusual for new users to experience errors and job failures.
When a suite task fails, a red icon appears next to the respective task name in the Cylc GUI.
To investigate the cause of a failure, or to monitor the progress of a suite, it is helpful to look at the suite's log files.
These files can be found in the directory ~/cylc-run/<suite-ID> within a folder named log.<TIMESTAMP>, which is also symlinked as log (referred to as logs folder below). Logs from previous runs of the same suite are archived as compressed files with the naming pattern log.<TIMESTAMP>.tar.gz.
Inside the logs folder, various files and subfolders can be found. The most relevant logs are typically:
Suite execution log
The primary suite execution log resides in ~/cylc-run/<suite-ID>/log/suite/log.
This file contains a chronological record of the suite's run history. Each line is a distinct log entry, generally formatted as <TIMESTAMP> <LOG-TYPE> - [<task-name>.<cylc-cycle-point>] <status>.
Example of a suite execution log file (click to see content)
2025-03-14T04:11:56Z INFO - Suite server: url=https://cylc.$USER$.$PROJECT.ps.gadi.nci.org.au:PORT/ pid=PID
2025-03-14T04:11:56Z INFO - Run: (re)start=0 log=1
2025-03-14T04:11:56Z INFO - Cylc version: 7.9.9
2025-03-14T04:11:56Z INFO - Run mode: live
2025-03-14T04:11:56Z INFO - Initial point: 01000101T0000Z
2025-03-14T04:11:56Z INFO - Final point: 01000328T2359Z
2025-03-14T04:11:56Z INFO - Cold Start 01000101T0000Z
2025-03-14T04:11:56Z INFO - [make_drivers.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:11:56Z INFO - [install_cold.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:11:56Z INFO - [make_cice.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:11:56Z INFO - [make_mom.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:11:56Z INFO - [install_ancil.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:11:56Z INFO - [fcm_make_um.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:11:58Z INFO - [fcm_make_um.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:11:57Z for job(01)
2025-03-14T04:11:58Z INFO - [fcm_make_um.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:11:58Z INFO - [install_ancil.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:11:57Z for job(01)
2025-03-14T04:11:58Z INFO - [install_ancil.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:11:58Z INFO - [install_cold.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:11:57Z for job(01)
2025-03-14T04:11:58Z INFO - [install_cold.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:11:58Z INFO - [make_cice.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:11:57Z for job(01)
2025-03-14T04:11:58Z INFO - [make_cice.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:11:58Z INFO - [make_drivers.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:11:57Z for job(01)
2025-03-14T04:11:58Z INFO - [make_drivers.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:11:58Z INFO - [make_mom.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:11:57Z for job(01)
2025-03-14T04:11:58Z INFO - [make_mom.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:12:04Z INFO - [make_drivers.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:12:02Z for job(01)
2025-03-14T04:12:04Z INFO - [make_drivers.01000101T0000Z] -health check settings: execution timeout=PT12H
2025-03-14T04:12:04Z INFO - [install_cold.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:12:02Z for job(01)
2025-03-14T04:12:04Z INFO - [install_cold.01000101T0000Z] -health check settings: execution timeout=PT12H
2025-03-14T04:12:04Z INFO - [make_cice.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:12:02Z for job(01)
2025-03-14T04:12:04Z INFO - [make_cice.01000101T0000Z] -health check settings: execution timeout=PT12H
2025-03-14T04:12:04Z INFO - [make_mom.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:12:02Z for job(01)
2025-03-14T04:12:04Z INFO - [make_mom.01000101T0000Z] -health check settings: execution timeout=PT12H
2025-03-14T04:12:04Z INFO - [install_ancil.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:12:02Z for job(01)
2025-03-14T04:12:04Z INFO - [install_ancil.01000101T0000Z] -health check settings: execution timeout=PT12H
2025-03-14T04:12:04Z INFO - [client-command] get_latest_state dm5220@gadi-login-08.gadi.nci.org.au:cylc-gui c842e8fb-017a-47ec-8706-7fef0af6d5f5
2025-03-14T04:12:06Z INFO - [make_drivers.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:12:05Z for job(01)
2025-03-14T04:12:09Z INFO - [make_cice.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:12:07Z for job(01)
2025-03-14T04:12:09Z INFO - [install_ancil.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:12:07Z for job(01)
2025-03-14T04:12:10Z INFO - [make2_cice.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:12:11Z INFO - [make2_cice.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:12:10Z for job(01)
2025-03-14T04:12:11Z INFO - [make2_cice.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:12:14Z INFO - [make_mom.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:12:12Z for job(01)
2025-03-14T04:12:15Z INFO - [make2_mom.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:12:18Z INFO - [make2_mom.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:12:15Z for job(01)
2025-03-14T04:12:18Z INFO - [make2_mom.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:12:37Z INFO - [install_cold.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:12:35Z for job(01)
2025-03-14T04:12:48Z INFO - [make2_mom.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:12:45Z for job(01)
2025-03-14T04:12:48Z INFO - [make2_mom.01000101T0000Z] -health check settings: execution timeout=PT40M, polling intervals=PT31M,PT2M,PT7M,...
2025-03-14T04:12:59Z INFO - [fcm_make_um.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:12:57Z for job(01)
2025-03-14T04:12:59Z INFO - [fcm_make_um.01000101T0000Z] -health check settings: execution timeout=PT1H10M, polling intervals=PT1H1M,PT2M,PT7M,...
2025-03-14T04:13:00Z INFO - [make2_cice.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:12:58Z for job(01)
2025-03-14T04:13:00Z INFO - [make2_cice.01000101T0000Z] -health check settings: execution timeout=PT30M, polling intervals=PT21M,PT2M,PT7M,...
2025-03-14T04:15:58Z INFO - [make2_cice.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:15:57Z for job(01)
2025-03-14T04:18:19Z INFO - [make2_mom.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:18:17Z for job(01)
2025-03-14T04:20:10Z INFO - [fcm_make_um.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:20:08Z for job(01)
2025-03-14T04:20:11Z INFO - [fcm_make2_um.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:20:13Z INFO - [fcm_make2_um.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:20:12Z for job(01)
2025-03-14T04:20:13Z INFO - [fcm_make2_um.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:20:37Z INFO - [fcm_make2_um.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:20:35Z for job(01)
2025-03-14T04:20:37Z INFO - [fcm_make2_um.01000101T0000Z] -health check settings: execution timeout=PT1H10M, polling intervals=PT1H1M,PT2M,PT7M,...
2025-03-14T04:37:11Z INFO - [fcm_make2_um.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:37:09Z for job(01)
2025-03-14T04:37:12Z INFO - [recon.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:37:19Z INFO - [recon.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:37:14Z for job(01)
2025-03-14T04:37:19Z INFO - [recon.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:37:53Z INFO - [recon.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:37:52Z for job(01)
2025-03-14T04:37:53Z INFO - [recon.01000101T0000Z] -health check settings: execution timeout=PT30M, polling intervals=PT21M,PT2M,PT7M,...
2025-03-14T04:38:28Z INFO - [recon.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:38:28Z for job(01)
2025-03-14T04:38:29Z INFO - [coupled.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:38:30Z INFO - [coupled.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:38:30Z for job(01)
2025-03-14T04:38:30Z INFO - [coupled.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:42:00Z INFO - [coupled.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:41:59Z for job(01)
2025-03-14T04:42:00Z INFO - [coupled.01000101T0000Z] -health check settings: execution timeout=PT2H10M, polling intervals=PT2H1M,PT2M,PT7M,...
2025-03-14T04:45:28Z INFO - [coupled.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:45:27Z for job(01)
2025-03-14T04:45:29Z INFO - [filemove.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:45:30Z INFO - [filemove.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:45:30Z for job(01)
2025-03-14T04:45:30Z INFO - [filemove.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:46:12Z INFO - [filemove.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:46:11Z for job(01)
2025-03-14T04:46:12Z INFO - [filemove.01000101T0000Z] -health check settings: execution timeout=PT15M, polling intervals=PT6M,PT2M,PT7M,...
2025-03-14T04:46:26Z INFO - [filemove.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:46:25Z for job(01)
2025-03-14T04:46:27Z INFO - [history_postprocess.01000101T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:46:27Z INFO - [coupled.01000201T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:46:28Z INFO - [coupled.01000201T0000Z] status=ready: (internal)submitted at 2025-03-14T04:46:28Z for job(01)
2025-03-14T04:46:28Z INFO - [coupled.01000201T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:46:28Z INFO - [history_postprocess.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:46:28Z for job(01)
2025-03-14T04:46:28Z INFO - [history_postprocess.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:46:58Z INFO - [history_postprocess.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:46:57Z for job(01)
2025-03-14T04:46:58Z INFO - [history_postprocess.01000101T0000Z] -health check settings: execution timeout=PT1H40M, polling intervals=PT1H31M,PT2M,PT7M,...
2025-03-14T04:47:09Z INFO - [coupled.01000201T0000Z] status=submitted: (received)started at 2025-03-14T04:47:09Z for job(01)
2025-03-14T04:47:09Z INFO - [coupled.01000201T0000Z] -health check settings: execution timeout=PT2H10M, polling intervals=PT2H1M,PT2M,PT7M,...
2025-03-14T04:47:11Z INFO - [history_postprocess.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:47:10Z for job(01)
2025-03-14T04:47:12Z INFO - [housekeep.01000101T0000Z] -submit-num=01, owner@host=cylc.dm5220.tm70.ps.gadi.nci.org.au
2025-03-14T04:47:13Z INFO - [housekeep.01000101T0000Z] status=ready: (internal)submitted at 2025-03-14T04:47:13Z for job(01)
2025-03-14T04:47:13Z INFO - [housekeep.01000101T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:47:18Z INFO - [housekeep.01000101T0000Z] status=submitted: (received)started at 2025-03-14T04:47:17Z for job(01)
2025-03-14T04:47:18Z INFO - [housekeep.01000101T0000Z] -health check settings: execution timeout=PT12H
2025-03-14T04:47:21Z INFO - [housekeep.01000101T0000Z] status=running: (received)succeeded at 2025-03-14T04:47:20Z for job(01)
2025-03-14T04:50:13Z INFO - [coupled.01000201T0000Z] status=running: (received)succeeded at 2025-03-14T04:50:11Z for job(01)
2025-03-14T04:50:14Z INFO - [filemove.01000201T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:50:15Z INFO - [filemove.01000201T0000Z] status=ready: (internal)submitted at 2025-03-14T04:50:14Z for job(01)
2025-03-14T04:50:15Z INFO - [filemove.01000201T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:50:54Z INFO - [filemove.01000201T0000Z] status=submitted: (received)started at 2025-03-14T04:50:52Z for job(01)
2025-03-14T04:50:54Z INFO - [filemove.01000201T0000Z] -health check settings: execution timeout=PT15M, polling intervals=PT6M,PT2M,PT7M,...
2025-03-14T04:51:08Z INFO - [filemove.01000201T0000Z] status=running: (received)succeeded at 2025-03-14T04:51:06Z for job(01)
2025-03-14T04:51:09Z INFO - [history_postprocess.01000201T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:51:09Z INFO - [coupled.01000301T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T04:51:10Z INFO - [coupled.01000301T0000Z] status=ready: (internal)submitted at 2025-03-14T04:51:09Z for job(01)
2025-03-14T04:51:10Z INFO - [coupled.01000301T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:51:10Z INFO - [history_postprocess.01000201T0000Z] status=ready: (internal)submitted at 2025-03-14T04:51:09Z for job(01)
2025-03-14T04:51:10Z INFO - [history_postprocess.01000201T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:51:23Z INFO - [history_postprocess.01000201T0000Z] status=submitted: (received)started at 2025-03-14T04:51:22Z for job(01)
2025-03-14T04:51:23Z INFO - [history_postprocess.01000201T0000Z] -health check settings: execution timeout=PT1H40M, polling intervals=PT1H31M,PT2M,PT7M,...
2025-03-14T04:51:35Z INFO - [history_postprocess.01000201T0000Z] status=running: (received)succeeded at 2025-03-14T04:51:34Z for job(01)
2025-03-14T04:51:36Z INFO - [housekeep.01000201T0000Z] -submit-num=01, owner@host=cylc.dm5220.tm70.ps.gadi.nci.org.au
2025-03-14T04:51:37Z INFO - [housekeep.01000201T0000Z] status=ready: (internal)submitted at 2025-03-14T04:51:37Z for job(01)
2025-03-14T04:51:37Z INFO - [housekeep.01000201T0000Z] -health check settings: submission timeout=P3D
2025-03-14T04:51:41Z INFO - [housekeep.01000201T0000Z] status=submitted: (received)started at 2025-03-14T04:51:40Z for job(01)
2025-03-14T04:51:41Z INFO - [housekeep.01000201T0000Z] -health check settings: execution timeout=PT12H
2025-03-14T04:51:45Z INFO - [housekeep.01000201T0000Z] status=running: (received)succeeded at 2025-03-14T04:51:43Z for job(01)
2025-03-14T05:01:00Z INFO - [coupled.01000301T0000Z] status=submitted: (received)started at 2025-03-14T05:00:59Z for job(01)
2025-03-14T05:01:00Z INFO - [coupled.01000301T0000Z] -health check settings: execution timeout=PT2H10M, polling intervals=PT2H1M,PT2M,PT7M,...
2025-03-14T05:04:40Z INFO - [coupled.01000301T0000Z] status=running: (received)succeeded at 2025-03-14T05:04:39Z for job(01)
2025-03-14T05:04:41Z INFO - [filemove.01000301T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T05:04:43Z INFO - [filemove.01000301T0000Z] status=ready: (internal)submitted at 2025-03-14T05:04:43Z for job(01)
2025-03-14T05:04:43Z INFO - [filemove.01000301T0000Z] -health check settings: submission timeout=P3D
2025-03-14T05:05:25Z INFO - [filemove.01000301T0000Z] status=submitted: (received)started at 2025-03-14T05:05:24Z for job(01)
2025-03-14T05:05:25Z INFO - [filemove.01000301T0000Z] -health check settings: execution timeout=PT15M, polling intervals=PT6M,PT2M,PT7M,...
2025-03-14T05:05:40Z INFO - [filemove.01000301T0000Z] status=running: (received)succeeded at 2025-03-14T05:05:39Z for job(01)
2025-03-14T05:05:41Z INFO - [history_postprocess.01000301T0000Z] -submit-num=01, owner@host=localhost
2025-03-14T05:05:42Z INFO - [history_postprocess.01000301T0000Z] status=ready: (internal)submitted at 2025-03-14T05:05:42Z for job(01)
2025-03-14T05:05:42Z INFO - [history_postprocess.01000301T0000Z] -health check settings: submission timeout=P3D
2025-03-14T05:05:57Z INFO - [history_postprocess.01000301T0000Z] status=submitted: (received)started at 2025-03-14T05:05:56Z for job(01)
2025-03-14T05:05:57Z INFO - [history_postprocess.01000301T0000Z] -health check settings: execution timeout=PT1H40M, polling intervals=PT1H31M,PT2M,PT7M,...
2025-03-14T05:06:08Z INFO - [history_postprocess.01000301T0000Z] status=running: (received)succeeded at 2025-03-14T05:06:07Z for job(01)
2025-03-14T05:06:09Z INFO - [housekeep.01000301T0000Z] -submit-num=01, owner@host=cylc.dm5220.tm70.ps.gadi.nci.org.au
2025-03-14T05:06:10Z INFO - [housekeep.01000301T0000Z] status=ready: (internal)submitted at 2025-03-14T05:06:10Z for job(01)
2025-03-14T05:06:10Z INFO - [housekeep.01000301T0000Z] -health check settings: submission timeout=P3D
2025-03-14T05:06:18Z INFO - [housekeep.01000301T0000Z] status=submitted: (received)started at 2025-03-14T05:06:16Z for job(01)
2025-03-14T05:06:18Z INFO - [housekeep.01000301T0000Z] -health check settings: execution timeout=PT12H
2025-03-14T05:06:20Z INFO - [housekeep.01000301T0000Z] status=running: (received)succeeded at 2025-03-14T05:06:19Z for job(01)
2025-03-14T05:06:20Z INFO - Suite shutting down - AUTOMATIC
2025-03-14T05:06:28Z INFO - DONE
This file helps identify specific tasks that failed during the suite run.
Tip
When a task fails, the LOG-TYPE will typically be ERROR or CRITICAL, instead of the more common INFO.
Once a specific task and Cylc cycle point are identified, the task-specific logs can be inspected.
Task-specific logs
Logs for individual tasks are located in subfolders within the logs folder, following this path structure:
~/cylc-run/<suite-ID>/log/job/<cylc-cycle-point>/<task-name>/<retry-number>
<retry-number> indicates the number of retries for the same task, with the latest retry symlinked as NN. For the RAS, the <cylc-cycle-point> is 1 (because all jobs run in a single cycle). For the OAS and RNS, the <cylc-cycle-point> is the date/time of the cycle.
For example, logs for the most recent retry of a task named Lismore_d1100_ancil_um_mean_orog at Cylc cycle point 1 can be found in the folder ~/cylc-run/<suite-ID>/log/job/1/Lismore_d1100_ancil_um_mean_orog/NN.
Within this directory, the job.out and job.err files (representing STDOUT and STDERR, respectively) can be found, along with other related log files.
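For instance, to view the STDERR of the latest attempt of the example task above, you could run something like:
less ~/cylc-run/<suite-ID>/log/job/1/Lismore_d1100_ancil_um_mean_orog/NN/job.err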
Tip
Within the Cylc GUI, logs for a specific task can be viewed by right-clicking on the task and selecting the desired log from the View Job Logs (Viewer) menu.
Stop, restart, reload and clean suites
In some cases, you may want to control the running state of a suite.
If your Cylc GUI has been closed and you are unsure whether your suite is still running, you can scan for active suites and reopen the GUI.
To scan for active suites, run:
cylc scan
To reopen the Cylc GUI for a suite, run the following command from within its suite directory:
rose suite-gcontrol
STOP a suite
To shut down a suite safely, run the following command from within the suite directory:
rose suite-stop -y
You can then check for and clear any PBS jobs still associated with the run:
- Check the status of all your PBS jobs:
qstat -u $USER
- Delete any job related to your run:
qdel <job-ID>
RESTART a suite
There are two main ways to restart a suite:
- SOFT restart
To reinstall the suite and reopen Cylc in the same state it was in prior to being stopped, run the following command from within the suite directory:
rose suite-run --restart
Warning
You may need to manually trigger failed tasks from the Cylc GUI.
- HARD restart
To overwrite any previous runs of the suite and start afresh, run the following command from within the suite directory:
rose suite-run --new
Warning
This will overwrite all existing model output and logs for the same suite.
RELOAD a suite
In some cases, the suite needs to be updated without necessarily having to stop it (e.g., after fixing a typo in a file). Updating an active suite is called a reload, where the suite is re-installed and Cylc is updated with the changes. This is similar to a SOFT restart, except new changes are installed, so you may need to manually trigger failed tasks from the Cylc GUI.
To reload a suite, run the following command from within the suite directory:
rose suite-run --reload
CLEAN a suite
To remove all files and folders created by the suite within the /scratch/$PROJECT/$USER/cylc-run/<suite-ID> directory, run the following command from within the suite directory:
rose suite-clean
Alternatively, you can achieve the same behaviour within a new submission of an experiment, by appending the --new option to the rose suite-run command:
rose suite-run --new
Warning
Cleaning a suite folder will remove any non-archived data (i.e., output files, logs, executables, etc.) associated with the suite.
RAS output files
The RAS output ancillary files can be found in /scratch/$PROJECT/$USER/cylc-run/<suite-ID>/share/data/ancils.
Ancillaries are divided into folders according to each nested region name, and then further separated according to each nest (i.e., Resolution) name. The path of ancillaries for a specific nest (i.e., Resolution) is /scratch/$PROJECT/$USER/cylc-run/<suite-ID>/share/data/ancils/<nested_region_name>/<nest_name>.
The example above has one nested_region_name called Lismore, one nest named era5 (the outer domain, corresponding to Resolution 1), and two inner nests (Resolution 2 and Resolution 3) named d1100 and d0198, respectively.
Thus, the ancillary files directory /scratch/$PROJECT/$USER/cylc-run/<suite-ID>/share/data/ancils/ contains the following subdirectories:
- Lismore/d1100
- Lismore/d0198
- Lismore/era5
Ancillary data files are typically output in the UM fieldsfile format.
OSTIA Ancillary Suite (OAS)
Archived Operational Sea Surface Temperature and Sea Ice Analysis (OSTIA) data can be packaged into ancillary files for use in the RNS.
The suite-ID of the OAS is u-dk517.
Get and run OAS configuration
Steps to obtain and run the OAS, as well as monitor logs, are similar to those listed above for the RAS.
The main difference is the suite-ID, which for the OAS is u-dk517.
To get the OAS configuration, follow the steps listed in Get the RAS configuration, making sure you use the correct OAS suite-ID u-dk517 when copying the suite.
To run the OAS configuration, follow the steps listed in Run the RAS.
To check the OAS suite logs, follow the steps listed in Check suite logs.
OAS output files
All the OAS output files are available in the OSTIA_OUTPUT directory.
OAS ancillary data files are output in the UM fieldsfile format.
For example, the global OSTIA ancillary file for the first cycle (20220226T0000Z) of the Lismore experiment can be found at /scratch/$PROJECT/$USER/OSTIA_ANCIL/20220226T0000Z_ostia.anc.
Warning
The RNS updates OSTIA data daily at T0600Z (T06Z in ISO 8601 time format). If the INITIAL_CYCLE_POINT of your suite is set to a time before T0600Z, you will also need OSTIA ancillary files for the day before the starting day of your suite.
For example, if a suite has the INITIAL_CYCLE_POINT set to 20250612T0000Z (i.e., 12th Jun 2025 at midnight), it will also require the OSTIA ancillary files for the 11th Jun 2025.
Regional Nesting Suite (RNS)
The RNS uses the ancillary files produced by the RAS to run the regional forecast for the domain of interest.
The suite-ID of the RNS is u-by395.
The latest release branch is nci_access_ram3.
Get and run RNS configuration
Steps to obtain and run the RNS, as well as monitor logs, are similar to those listed above for the RAS.
The main difference is the suite-ID, which for the RNS is u-by395.
To get the RNS configuration, follow the steps listed in Get the RAS configuration, making sure you use the correct RNS suite-ID u-by395 when copying the suite.
To run the RNS configuration, follow the steps listed in Run the RAS.
To check the RNS suite logs, follow the steps listed in Check suite logs.
RNS output files
All the RNS output files are available in the directory /scratch/$PROJECT/$USER/cylc-run/<suite-ID>. They are also symlinked in ~/cylc-run/<suite-ID>.
The RNS output data can be found in the directory /scratch/$PROJECT/$USER/cylc-run/<suite-ID>/share/cycle, grouped by cycle.
Within each cycle directory, outputs are divided into multiple nested subdirectories in the format <nested_region_name>/<nest_name>/<science_configuration>, with <nested_region_name> and <nest_name> referring to the respective configurable options. The <science_configuration> is usually GAL9 or RAL3.3, depending on the nest resolution.
Each <science_configuration> directory has the following subdirectories:
- ics → initial conditions
- lbcs → lateral boundary conditions
- um → model output data
The RNS output data files are in UM fieldsfile format.
For example, the model output data for the first cycle (20220226T0000Z) of the Lismore experiment (Lismore nested_region_name, using a RAL3P3 science_configuration and d0198 as a nest_name) can be found in /scratch/$PROJECT/$USER/cylc-run/<suite-ID>/share/cycle/20220226T0000Z/Lismore/d0198/RAL3P3/um/umnsaa_pa000.
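To inspect these outputs from the command line, you could, for example, list that directory:
ls /scratch/$PROJECT/$USER/cylc-run/<suite-ID>/share/cycle/20220226T0000Z/Lismore/d0198/RAL3P3/um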
Tip
The output data name format may vary depending on some configuration parameters.
To change which output variables are produced, refer to access-ram3-configs#Model_Outputs
Edit ACCESS-rAM3 configuration
This section describes how to modify the ACCESS-rAM3 configuration.
In general, ACCESS modelling suites can be edited either by directly modifying the configuration files within the suite directory, or by using the Rose GUI.
Warning
Unless you are an experienced user, directly modifying the configuration files is discouraged, as it can easily introduce errors.
Rose GUI
To open the Rose GUI, run the following command from within the suite directory:
rose edit &
Tip
The & is optional. It allows the terminal prompt to remain active while running the Rose GUI as a separate process in the background.
Change start date and/or run length
Warning
INITIAL_CYCLE_POINT and FINAL_CYCLE_POINT define all the Cylc cycle points that are set within the experiment run.
The model will always run for a full cycling frequency (1 day) for each Cylc cycle point.
This means, for example, that with INITIAL_CYCLE_POINT set to 20220226T0000Z, and FINAL_CYCLE_POINT set to +P1D (plus 1 day), 2 Cylc cycle points will be set (20220226T0000Z and 20220227T0000Z). Therefore, the model will run for a total of 2 days!
To avoid running the model for longer than desired, we suggest appending -PT1S (minus 1 second) to the relative duration specified in the FINAL_CYCLE_POINT (refer to the example below).
The run length is calculated using the INITIAL_CYCLE_POINT and FINAL_CYCLE_POINT fields.
Both these fields use ISO 8601 date format, with FINAL_CYCLE_POINT also accepting relative ISO 8601 Durations.
For example, to run the experiment for 2 days starting on the 5th April 2000, set INITIAL_CYCLE_POINT to 20000405T0000Z and FINAL_CYCLE_POINT to +P2D-PT1S.
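As an illustration only (the exact layout of rose-suite.conf may differ in your copy of the suite, so treat this as a sketch rather than the definitive format), the corresponding entries might look like:
INITIAL_CYCLE_POINT='20000405T0000Z'
FINAL_CYCLE_POINT='+P2D-PT1S'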
- OAS
The RNS requires the global OSTIA ancillary files to be available on disk for each day of the run. If the simulation date/time changes such that the required global OSTIA ancillary files are not already present, the OAS must be re-run with the new date/time. The OAS runs as multiple PBS job submissions, with each job preparing global OSTIA ancillary information for one day. The job scheduler automatically resubmits the suite at the chosen cycling frequency until the total run length is reached.
To modify these parameters within the Rose GUI, navigate to suite conf → Ostia Ancillary Generation Suite → Cycling options, edit the related field and click the Save button.
- RNS
The RNS runs as multiple PBS job submissions, each one constituting a cycle. The job scheduler automatically resubmits the suite at the chosen cycling frequency until the total run length is reached.
Warning
The cycling frequency is currently set to 24 hours (1 day) and should be left unchanged to avoid errors. This also means the model will run for a minimum of 1 day.
To modify these parameters within the Rose GUI, navigate to suite conf → Nesting Suite → Cycling options, edit the related field and click the Save button.
Get Help
If you have questions or need help regarding ACCESS-rAM3, consider creating a topic in the Regional Nesting Suite category of the ACCESS-Hive Forum.
For assistance on how to request help from ACCESS-NRI, follow the guidelines on how to get help.
For more detailed documentation see access-ram3-configs.