Regional Ocean Modeling System (ROMS)

The Regional Ocean Modeling System (**ROMS**) framework diagram is shown above.
It illustrates various computational pathways: standalone or coupled to
atmospheric and/or wave models. It follows the Earth System Modeling Framework
(ESMF) conventions for model coupling: initialize, run, and finalize. The
dynamical kernel of **ROMS** comprises four separate models: the
nonlinear (NLM), tangent linear (TLM),
representer tangent linear (RPM), and adjoint (ADM).
There are several drivers to run each model (NLM,
TLM, RPM, and ADM) separately
and together. The drivers shown in the propagator group are used
for Generalized Stability Theory (GST) analysis (Moore et al., 2004) to study
the dynamics, sensitivity, and stability of ocean circulations to naturally
occurring perturbations, to study errors or uncertainties in the forecasting
system, and to guide adaptive sampling. The driver for adjoint sensitivities (ADSEN) computes the
response of a chosen function of the model circulation to variations in all
physical attributes of the system
(Moore et al., 2006).
It includes drivers for
strong (S4DVAR, IS4DVAR) and weak
(W4DVAR) constraint variational data
assimilation (Arango et al., 2006; Di Lorenzo et al., 2006). A driver for
ensemble prediction is available to perturb forcing and/or initial conditions
along the most unstable directions of the state space using singular vectors.
Finally, several drivers in the sanity-check group test the
accuracy and correctness of the TLM, RPM, and
ADM algorithms.
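The singular-vector idea behind the GST and ensemble drivers can be sketched with a toy propagator. Here a small random matrix stands in for the tangent linear propagator; in ROMS the matrix-vector products are full TLM and ADM integrations, and the production drivers use more sophisticated iterative eigensolvers than the naive power iteration shown here. All names and sizes below are illustrative.

```python
import numpy as np

# Sketch: the leading singular vector of a tangent-linear propagator M
# (the most unstable perturbation direction) is the dominant eigenvector
# of M^T M.  It can be found by alternately applying the TLM (M v) and
# the ADM (M^T w), which is why the adjoint model is needed for GST.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))          # stand-in for the propagator

v = rng.standard_normal(5)
v /= np.linalg.norm(v)
for _ in range(300):                     # power iteration on M^T M
    w = M @ v                            # tangent-linear step
    v = M.T @ w                          # adjoint step
    v /= np.linalg.norm(v)

sigma = np.linalg.norm(M @ v)            # leading singular value estimate
```

The converged `v` is the fastest-growing perturbation over the propagation interval, and `sigma` its amplification factor.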

**ROMS** is a free-surface, terrain-following, primitive equations ocean model
widely used by the scientific community for a diverse range of applications
(e.g., Haidvogel et al., 2000;
Marchesiello et al., 2003;
Peliz et al., 2003;
Di Lorenzo, 2003;
Dinniman et al., 2003;
Budgell, 2005;
Warner et al., 2005a,
b;
Wilkin et al., 2005).
The algorithms that comprise the **ROMS** computational
nonlinear kernel are described in detail in Shchepetkin and McWilliams
(2003,
2005),
and the tangent linear and adjoint kernels and platforms are described
in Moore et al. (2004).
**ROMS** includes accurate and efficient physical and
numerical algorithms and several coupled models for biogeochemical, bio-optical,
sediment, and sea ice applications. The sea ice model is described in
Budgell (2005).
It also includes several vertical mixing schemes
(Warner et al., 2005a)
and supports multiple levels of nesting and composed grids.

For computational economy, the hydrostatic primitive equations for momentum are solved using a split-explicit time-stepping scheme, which requires special treatment and coupling between barotropic (fast) and baroclinic (slow) modes. A finite number of barotropic time steps, within each baroclinic step, are carried out to evolve the free-surface and vertically integrated momentum equations. In order to avoid the errors associated with the aliasing of frequencies resolved by the barotropic steps but unresolved by the baroclinic step, the barotropic fields are time averaged before they replace those values obtained with a longer baroclinic step. A cosine-shaped time filter, centered at the new time level, is used for the averaging of the barotropic fields (Shchepetkin and McWilliams, 2005). In addition, the separated time-stepping is constrained to maintain exactly both the volume conservation and consistency preservation properties needed for the tracer equations (Shchepetkin and McWilliams, 2005). Currently, all 2D and 3D equations are time-discretized using a third-order accurate predictor (leapfrog) and corrector (Adams-Moulton) time-stepping algorithm, which is very robust and stable. The enhanced stability of the scheme allows larger time steps, by a factor of about four, which more than offsets the increased cost of the predictor-corrector algorithm.
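The barotropic time averaging can be illustrated with a simple raised-cosine weight set. This is a hedged sketch, not the exact filter of Shchepetkin and McWilliams (2005), which uses a more elaborate shape centered at the new time level; the point here is only that the fast fields are combined with smooth, normalized weights.

```python
import numpy as np

# Illustrative cosine-shaped averaging weights for the fast barotropic
# steps.  N fast steps span one slow baroclinic step; the weights taper
# like a raised cosine and are normalized to unit sum so that the
# filtered fields preserve volume conservation.
N = 20                                   # barotropic steps per baroclinic step
m = np.arange(1, N + 1)
w = 1.0 - np.cos(2.0 * np.pi * m / N)    # raised-cosine taper
w /= w.sum()                             # normalize: weights sum to 1

eta_fast = np.sin(2.0 * np.pi * m / 5.0) # stand-in fast free-surface values
eta_avg = np.sum(w * eta_fast)           # filtered value passed to slow mode
```

Frequencies resolved by the fast steps but not by the slow step are strongly damped by this averaging, which is what suppresses the aliasing errors described above.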

In the vertical, the primitive equations are discretized over variable topography
using stretched terrain-following coordinates
(Song and Haidvogel, 1994). The
stretched coordinates allow increased resolution in areas of interest, such as
thermocline and bottom boundary layers. The default stencil uses centered,
second-order finite differences on a staggered vertical grid. Options for
higher-order stencils are available via a conservative, parabolic spline reconstruction
of vertical derivatives
(Shchepetkin and McWilliams, 2005).
This class of model
exhibits stronger sensitivity to topography, which results in pressure-gradient
errors. These errors arise from splitting the pressure-gradient term into an
along-sigma component and a hydrostatic correction (for details, see
Haidvogel and Beckmann, 1999).
The numerical algorithm in **ROMS** is designed to reduce
such errors (Shchepetkin and McWilliams, 2003).
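The stretched terrain-following coordinate can be sketched with the Song and Haidvogel (1994) stretching function; the simplified depth mapping below (with a critical depth `hc` and zero free surface) is an illustration, not the full ROMS vertical transform, and the parameter values are arbitrary.

```python
import numpy as np

# Song and Haidvogel (1994) stretching function C(s) for s in [-1, 0]:
# theta sharpens resolution near the surface, b shifts resolution toward
# the bottom boundary layer.  C(-1) = -1 and C(0) = 0 by construction.
def C(s, theta=5.0, b=0.5):
    surf = (1.0 - b) * np.sinh(theta * s) / np.sinh(theta)
    bott = b * (np.tanh(theta * (s + 0.5)) - np.tanh(0.5 * theta)) \
           / (2.0 * np.tanh(0.5 * theta))
    return surf + bott

s = np.linspace(-1.0, 0.0, 31)           # sigma levels, bottom to surface
h, hc = 1000.0, 50.0                     # water depth, critical depth (m)
z = hc * s + (h - hc) * C(s)             # level depths at zero free surface
```

With these parameters the levels cluster near both the surface (thermocline) and the bottom boundary layer, which is exactly the selective refinement described above.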

In the horizontal, the primitive equations are evaluated using boundary-fitted, orthogonal curvilinear coordinates on a staggered Arakawa C-grid. The general formulation of curvilinear coordinates includes both Cartesian (constant metrics) and spherical (variable metrics) coordinates. Coastal boundaries can also be specified on the discretized grid via land/sea masking. As in the vertical, the horizontal stencil uses centered, second-order finite differences. However, the code is designed to make the implementation of higher-order stencils easy.
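The C-grid staggering can be sketched in one small example. On an Arakawa C-grid, `u` lives on east/west cell faces, `v` on north/south faces, and scalars at cell centers, so a centered second-order divergence needs no interpolation. Uniform Cartesian metrics are assumed here for simplicity; array shapes and spacing are illustrative.

```python
import numpy as np

# Divergence at cell centers on a staggered Arakawa C-grid:
#   div = (u_east - u_west) / dx + (v_north - v_south) / dy
ny, nx = 4, 5
dx = dy = 1.0e3                          # grid spacing (m)
u = np.ones((ny, nx + 1))                # u at faces normal to x
v = np.zeros((ny + 1, nx))               # v at faces normal to y

div = (u[:, 1:] - u[:, :-1]) / dx + (v[1:, :] - v[:-1, :]) / dy
```

For the uniform flow above the divergence vanishes identically, as the staggered centered differencing guarantees.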

**ROMS** has various options for advection schemes: second- and fourth-order
centered differences, and third-order upstream biased. The latter scheme is the
model default and it has a velocity-dependent hyper-diffusion dissipation as the
dominant truncation error
(Shchepetkin and McWilliams, 1998). These schemes are
stable for the predictor-corrector methodology of the model. In addition, there
is an option for a conservative, parabolic spline representation of vertical
advection, which has dispersion properties similar to those of an eighth-order
accurate conventional scheme.
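The upstream-biased idea can be sketched in one dimension. This is a QUICK-like third-order interface value, not the exact ROMS discretization: for flow in the positive direction, the interpolation uses one extra upstream point, which is what produces the velocity-dependent hyper-diffusive truncation error mentioned above.

```python
import numpy as np

# Third-order, upstream-biased tracer value at interface i+1/2,
# assuming velocity u > 0 (so the stencil reaches one point upstream).
def face_value(q, i):
    return (-q[i - 1] + 5.0 * q[i] + 2.0 * q[i + 1]) / 6.0

q = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # linear profile
qf = face_value(q, 2)                    # exact for linear fields
```

For the linear profile the scheme reproduces the midpoint value exactly; the bias only acts on higher-order variations, selectively damping grid-scale noise.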

There are several subgrid-scale parameterizations in **ROMS**. The horizontal
mixing of momentum and tracers can be along vertical levels, geopotential
(constant depth) surfaces, or isopycnic (constant density) surfaces. The mixing
operator can be harmonic (3-point stencil) or biharmonic (5-point stencil). See
Haidvogel and Beckmann (1999) for an overview of all these operators.
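The two mixing operators can be sketched in one dimension (the full ROMS operators act along the rotated surfaces described above; coefficients and grid here are illustrative).

```python
import numpy as np

# 1-D harmonic (Laplacian, 3-point) and biharmonic (5-point) mixing.
# The biharmonic operator carries a minus sign so that it damps, rather
# than amplifies, grid-scale noise, and it is more scale-selective.
def harmonic(q, nu, dx):
    return nu * (q[:-2] - 2.0 * q[1:-1] + q[2:]) / dx**2

def biharmonic(q, nu4, dx):
    lap = (q[:-2] - 2.0 * q[1:-1] + q[2:]) / dx**2
    return -nu4 * (lap[:-2] - 2.0 * lap[1:-1] + lap[2:]) / dx**2

x = np.arange(10.0)
q = x ** 2                               # quadratic test profile
mix2 = harmonic(q, nu=1.0, dx=1.0)       # constant (second derivative = 2)
mix4 = biharmonic(q, nu4=1.0, dx=1.0)    # vanishes for a quadratic
```

A quadratic field feels the harmonic operator but is invisible to the biharmonic one, which is why biharmonic mixing damps small scales while leaving large-scale gradients nearly untouched.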

The vertical mixing parameterization in **ROMS** can use either local or
nonlocal closure schemes. The local closure schemes are based on the level 2.5
turbulent kinetic energy equations by
Mellor and Yamada (1982) and the Generic
Length Scale (GLS) parameterization
(Umlauf and Burchard, 2003).
The nonlocal closure scheme is based on the K-profile, boundary layer formulation by
Large et al. (1994).
The K-profile scheme has been expanded to include both surface and
bottom oceanic boundary layers. The GLS is a two-equation turbulence model that
allows a wide range of vertical mixing closures, including the popular k-kl
(Mellor-Yamada level 2.5), k-ε, and k-ω schemes. Several stability functions
(Galperin et al., 1988;
Kantha and Clayson, 1994;
Canuto et al., 2001) have also been
added to provide further flexibility. A recent study
(Warner et al., 2005a)
evaluated the performance of these turbulence closures in **ROMS** in terms of
idealized sediment transport applications. In addition, there is a wave/current
bed boundary layer scheme that provides the bottom stress
(Styles and Glenn, 2000)
and sediment transport, which become important in coastal applications.
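The K-profile approach can be sketched with a simple shape function. This is a hedged illustration only: the shape function below is a generic cubic, not the full KPP polynomial of Large et al. (1994), and the velocity scale is held constant rather than computed from surface forcing and stability.

```python
import numpy as np

# K-profile-style diffusivity inside a surface boundary layer of depth h:
#   K(sigma) = h * w * G(sigma),  sigma = fractional depth in [0, 1],
# with a smooth shape function G vanishing at the surface (sigma = 0)
# and at the boundary-layer base (sigma = 1).
def k_profile(sigma, h=50.0, w=0.01):
    G = sigma * (1.0 - sigma) ** 2       # illustrative cubic shape
    return h * w * G                     # diffusivity (m^2/s)

sigma = np.linspace(0.0, 1.0, 11)
K = k_profile(sigma)                     # peaks in the upper boundary layer
```

The same profile shape, mirrored, is what the expanded scheme applies in the bottom boundary layer.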

Currently, the air-sea interaction boundary layer in **ROMS** is based on the
bulk parameterization of
Fairall et al. (1996).
It was adapted from the
**COARE** (Coupled Ocean-Atmosphere Response Experiment) algorithm for the
computation of surface fluxes of momentum, sensible heat, and latent heat. This
boundary layer is used for one- or two-way coupling with atmospheric models.
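The simplest form of a bulk flux can be sketched as follows. This constant-coefficient formula is far simpler than the COARE algorithm, which computes stability-dependent transfer coefficients iteratively from near-surface atmospheric data; the coefficient and wind speed below are illustrative.

```python
# Minimal bulk estimate of surface wind stress: tau = rho_a * Cd * |U| * U.
rho_air = 1.22                           # air density (kg/m^3)
Cd = 1.2e-3                              # illustrative neutral drag coefficient
u10 = 10.0                               # 10-m wind speed (m/s)
tau = rho_air * Cd * abs(u10) * u10      # wind stress (N/m^2)
```

Analogous bulk formulae, with their own transfer coefficients, give the sensible and latent heat fluxes.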

**ROMS** is a very modern code and uses C-preprocessing to activate the
various physical and numerical options. The code can be run on either serial or
parallel computers. The code uses a coarse-grained parallelization paradigm
that partitions the computational 3D grid into tiles. Each tile is then operated
on by a different parallel thread. Originally, the code was designed for
shared-memory computer architectures, and the compiler-dependent parallel
directives (OpenMP standard) are placed only in the main computational routine
of the code. An MPI version of the code has been developed, so the shared- and
distributed-memory paradigms coexist in a single code.

**ROMS** is a modular code written in F90/F95. Several
coding standards have been established to facilitate model readability,
maintenance, and portability. All the state model variables are dynamically
allocated and passed as arguments to the computational routines via de-referenced
pointer structures. All private or scratch arrays are automatic; their size is
determined when the procedure is entered. This code structure facilitates
computations over nested and composed grids. The parallel framework is
coarse-grained with both shared- and distributed-memory paradigms coexisting in
the same code. The shared-memory option follows OpenMP 2.0 standard.
**ROMS** has a generic distributed-memory interface that facilitates the
use of several message-passing protocols. Currently, the data exchange between
nodes is done with MPI; however, other protocols, such as MPI-2 and SHMEM,
can be coded without much effort.
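The coarse-grained tiling can be sketched with a small partitioning routine. The function name and the 2 x 4 decomposition below are illustrative, not the ROMS input parameters.

```python
# Sketch of coarse-grained tiling: the horizontal grid is split into
# contiguous tiles and each tile is handed to a thread or MPI rank.
def tile_ranges(n, ntiles):
    """Split n points into ntiles contiguous (start, stop) ranges."""
    size, extra = divmod(n, ntiles)
    ranges, start = [], 0
    for t in range(ntiles):
        stop = start + size + (1 if t < extra else 0)
        ranges.append((start, stop))
        start = stop
    return ranges

# A 100 x 80 horizontal grid split into 2 x 4 tiles:
tiles = [(i, j) for i in tile_ranges(100, 2) for j in tile_ranges(80, 4)]
```

Because each tile carries its own index ranges, the same loop bodies serve both the OpenMP threads and the MPI ranks; only the halo exchange between neighboring tiles differs between the two paradigms.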

**ROMS** has extensive pre- and post-processing software for data preparation,
analysis, plotting, and visualization. All model input and output is via
NetCDF, which facilitates the interchange of data between
computers, the user community, and other independent analysis software.