Custom Query (964 matches)

Results (94 - 96 of 964)

Ticket Owner Reporter Resolution Summary
#126 arango arango Done Time-variable 4DVAR surface forcing adjustment
Description

The 4DVAR surface forcing adjustment was revised to allow minimization at times other than initialization during strong constraint data assimilation. A new input parameter (nSFF) was introduced; nSFF is the number of time-steps between adjustments of the surface forcing fields. This required several changes to the I/O NetCDF files involved in 4DVAR data assimilation. All the changes are internal to ROMS.

Several new arrays with an additional dimension (nFrec) are introduced during the minimization. The number of forcing records to process is:

nFrec = 1 + ntimes / nSFF      (integer operation)

nSFF must either be a factor of ntimes (so that ntimes/nSFF is exact) or be greater than ntimes. Therefore, there are three possibilities:

(1) If  NSFF > NTIMES,       nFrec = 1  (constant adjustment)
(2) If  NSFF = NTIMES,       nFrec = 2  (initial and final)
(3) If  NSFF < NTIMES,       nFrec > 2  (adjustment every NSFF steps)
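
For example, the integer arithmetic above can be verified with a small stand-alone program (a sketch only; the parameter names follow this ticket, but the program itself is not part of ROMS and the values of ntimes and nSFF are arbitrary examples):

        program check_nfrec
!
!  Illustrates the integer computation of nFrec for the three cases
!  above.
!
        implicit none
        integer :: ntimes = 120
        integer :: nSFF, nFrec, i
        integer, dimension(3) :: trial = (/ 240, 120, 30 /)

        DO i = 1, 3
          nSFF = trial(i)
          nFrec = 1 + ntimes / nSFF              ! integer division
          PRINT '(a,i4,a,i3)', 'nSFF = ', nSFF, ',  nFrec = ', nFrec
        END DO
        end program check_nfrec

With ntimes = 120, the program prints nFrec = 1, 2, and 5, matching cases (1), (2), and (3) above.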

We will continue to examine and test this capability.

#127 arango arango Done Screening for unphysical currents and density in diag.F
Description

I added the capability to screen for large, unphysical values of the ocean currents and density anomaly in the diagnostics routine diag.F. Two new variables were added to mod_scalars.F:

        real(r8) :: max_speed = 20.0_r8         ! m/s
        real(r8) :: max_rho = 200.0_r8          ! kg/m3

containing the threshold values that trigger the model to stop (exit_flag=1) because it is blowing up. Execution stops before the output restart file is filled with NaNs.

Recall that the routine diag is called once per baroclinic time-step if ninfo=1. However, if the model blows up in the barotropic time-step loop (step2d), you will still get a lot of NaNs.
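
A minimal sketch of the kind of test involved is shown below (the real check lives in diag.F and operates on the diagnosed maxima over the whole grid; my_speed and my_rho here are hypothetical placeholders for those maxima):

        program check_blowup
!
!  Illustrative sketch only: compare diagnosed maxima against the
!  thresholds from mod_scalars.F and set exit_flag on blow-up.
!
        implicit none
        integer, parameter :: r8 = selected_real_kind(12,300)
        real(r8) :: max_speed = 20.0_r8         ! m/s
        real(r8) :: max_rho = 200.0_r8          ! kg/m3
        real(r8) :: my_speed, my_rho            ! hypothetical diagnosed maxima
        integer  :: exit_flag = 0

        my_speed = 35.0_r8                      ! pretend the model is blowing up
        my_rho = 50.0_r8
        IF ((my_speed.gt.max_speed).or.(ABS(my_rho).gt.max_rho)) THEN
          exit_flag = 1                         ! stop before writing NaNs
          PRINT *, 'Blow-up detected, exit_flag = ', exit_flag
        END IF
        end program check_blowup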

Many thanks to Richard Schmalz for suggesting this capability.

#129 arango arango Done Few performance optimizations
Description

Implemented a few performance optimizations in mpdata_adiff.F and several lmd_*.F routines, and added an option to use mpi_allreduce in distribute.F. Using mpi_allreduce usually reduces the communication overhead. This will improve the performance of floats, stations, and variational data assimilation, that is, of all the routines that call mp_collect.

Notice that at the top of distribute.F there are several local cpp options. They are intended to perform the same tasks, but at different levels of performance.

The ones specified by default are:

# define BOUNDARY_ALLREDUCE /* use mpi_allreduce in mp_boundary */
# undef  COLLECT_ALLGATHER  /* use mpi_allgather in mp_collect  */
# define COLLECT_ALLREDUCE  /* use mpi_allreduce in mp_collect  */
# define REDUCE_ALLGATHER   /* use mpi_allgather in mp_reduce   */
# undef  REDUCE_ALLREDUCE   /* use mpi_allreduce in mp_reduce   */

I highly recommend that you do not modify these definitions unless you know what you are doing. It requires extensive knowledge of and expertise in MPI communications. Some of the advanced MPI communication routines perform differently on different computers; they are usually optimized by the vendor for a particular hardware architecture.
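
As a rough illustration of the COLLECT_ALLREDUCE strategy (a sketch only, not the actual mp_collect code): if every process fills just its own slice of a global work array and leaves the remaining entries at zero, a single mpi_allreduce with MPI_SUM delivers the fully assembled array to all processes at once.

        subroutine collect_sketch (Npts, istr, iend, A)
!
!  Sketch of an allreduce-based collect.  Assumes each rank's owned
!  data is in A(istr:iend); entries outside that slice are ignored.
!
        use mpi
        implicit none
        integer, intent(in) :: Npts, istr, iend
        real(kind(1.0d0)), intent(inout) :: A(Npts)
        real(kind(1.0d0)) :: Awrk(Npts)
        integer :: MyError

        Awrk = 0.0d0
        Awrk(istr:iend) = A(istr:iend)
        CALL mpi_allreduce (Awrk, A, Npts, MPI_DOUBLE_PRECISION,        &
                            MPI_SUM, MPI_COMM_WORLD, MyError)
        return
        end subroutine collect_sketch

The actual routine is more involved, but this shows why one collective call can replace many point-to-point messages.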

Also notice that the vectorization directive in lmd_swfrac.F

!!DIR$ VECTOR ALWAYS

is commented out with an extra ! and will be removed during C-preprocessing: all code lines that start with !! are removed by cpp_clean. This directive may not be the same for all compilers.
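
For example, to activate the directive on a compiler that understands it, remove one of the leading exclamation marks so the line survives cpp_clean:

Disabled (stripped by cpp_clean):

!!DIR$ VECTOR ALWAYS

Enabled (kept and seen by compilers that support it):

!DIR$ VECTOR ALWAYS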

Many thanks to Xavier Vigouroux, Remi Revire, and others at BULL high performance computers for suggesting these optimizations.
