ROMS/TOMS Developers

Algorithms Update Web Log

arango - September 3, 2006 @ 16:40
Updated Weak Constraint 4DVAR Algorithms- Comments (0)

Corrected a couple of parallel and observation-processing bugs in the weak constraint algorithms:

  • Corrected a parallel bug in the convolution of the adjoint solution in drivers: convolution.h, symmetry.h, w4dpsas_ocean.h, and w4dvar_ocean.h. Now, the adjoint solution is read into the tangent linear state arrays with a call to routine get_state, using iTLM instead of iADM in the calling arguments. Then, the routines load_TLtoAD and load_ADtoTL are used to load the solution to convolve into the appropriate state arrays before applying the adjoint and tangent linear square-root diffusion operators, respectively. Only the interior solution is loaded (recall that get_state also puts the appropriate data in the ghost points). This is critical in parallel adjoint applications. Many thanks to Andy for finding this elusive bug.
  • Changed the order of the square-root adjoint and tangent linear diffusion operators in the weak constraint drivers to be consistent with the computation of the error covariance normalization coefficients and other 4DVAR-related algorithms. The square-root adjoint operator is applied first for half of the diffusion steps. The resulting solution is then filtered with the tangent linear operator for the other half of the diffusion steps. Recall that both square-root operators are used to impose symmetry; see the sketch after this list.
  • Corrected a parallel bug in the processing of the tangent linear vector TLmodVal in the weak constraint algorithms. This tangent linear vector is no longer read in routine obs_read, to avoid accumulating its values with mp_collect during parallel exchanges. In weak constraint, the entire observation operator is maintained in memory; it includes all the observations for the different time surveys. This requires additional logic in routines obs_write.F and ad_htobs.F. In addition, another parallel bug was fixed for applications with more than one observation time survey. Only the relevant section of the observation vector is processed, for a particular time, using the datum indices Mstr and Mend.
  • Corrected the processing of the tangent linear model forcing terms with the adjoint vector ADmodVal in ad_htobs.F. Previously, all time surveys were processed at once in the f_* arrays. Now, only the relevant observation time survey is processed and applied as impulse forcing.
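
As an aside, the ordering in the second item can be illustrated with a small 1-D toy. The program below is a self-contained sketch, not the ROMS code: all names are made up, and in this interior-only example the adjoint and tangent linear stencils happen to coincide, whereas in ROMS they are distinct operators:

      PROGRAM sqrt_diffusion_order
!
!  1-D toy: the adjoint square-root diffusion operator runs for the
!  first half of the diffusion steps, and the tangent linear
!  operator filters the result for the other half, so the composite
!  smoother is symmetric.
!
      IMPLICIT NONE
      INTEGER, PARAMETER :: n = 11         ! grid points
      INTEGER, PARAMETER :: nsteps = 10    ! total diffusion steps
      REAL(8), PARAMETER :: gamma = 0.25   ! stability number, <= 0.5
      REAL(8) :: v(n)
      INTEGER :: step

      v = 0.0d0
      v(n/2+1) = 1.0d0                     ! unit impulse
!
!  Adjoint square-root operator: first half of the steps.
      DO step = 1, nsteps/2
        CALL diffuse (v, n, gamma)
      END DO
!
!  Tangent linear square-root operator: second half of the steps.
      DO step = 1, nsteps/2
        CALL diffuse (v, n, gamma)
      END DO
      PRINT '(11F7.3)', v

      CONTAINS

        SUBROUTINE diffuse (f, m, g)
          INTEGER, INTENT(in) :: m
          REAL(8), INTENT(in) :: g
          REAL(8), INTENT(inout) :: f(m)
          REAL(8) :: w(m)
          INTEGER :: i
          w = f
          DO i = 2, m-1
            f(i) = w(i)+g*(w(i+1)-2.0d0*w(i)+w(i-1))
          END DO
        END SUBROUTINE diffuse

      END PROGRAM sqrt_diffusion_order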

If you get a compilation error about missing tl_balance.F and/or ad_balance.F, just do a make depend to solve the problem.

For the current updated file list.

arango - September 1, 2006 @ 16:28
Weak Constraint Observations Processing- Comments (0)

Fixed a couple of bugs:

  • Modified obs_initial.F, obs_read.F, and obs_write.F so that the observation arrays NLmodVal, TLmodVal, and ObsScale are exchanged correctly in MPI applications. These arrays are now zeroed out in obs_initial to allow exchanging data correctly between all tiles using mp_collect. Recall that the strategy here is to use a parallel reduction to process these nontiled arrays; otherwise, we would be adding nonzero, incorrect values during the exchange (see the sketch after this list). Many thanks to Brian and Manu for reporting this problem.
  • Fixed a bug in output.F and ad_output.F when creating multiple time-averaged or diagnostic terms NetCDF files. The first time-averaged cycle was being lost: a new file was created before the first cycle was written, so the previous file was left empty. Now, the creation of the new NetCDF file is delayed by one time step. This bug only applies to time-averaged NetCDF files. Many thanks to Andy for reporting this bug.
  • Added submit_is4dvar.bash, which is required to run on some clusters. Therefore, we now have the choice between csh and bash scripts.
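
To see why the zeroing in the first item matters, here is a small, self-contained sketch of the reduction strategy. MPI_ALLREDUCE stands in for the role that mp_collect plays in ROMS, and all names and sizes are made up:

      PROGRAM obs_collect_demo
!
!  Each tile fills only the entries of the nontiled observation
!  array that it owns; every other entry must be zero, so that a
!  global SUM across tiles assembles the full vector without
!  adding stale, nonzero values.
!
      USE mpi
      IMPLICIT NONE
      INTEGER, PARAMETER :: Nobs = 8
      REAL(8) :: local(Nobs), global(Nobs)
      INTEGER :: ierr, rank, nprocs, i

      CALL MPI_INIT (ierr)
      CALL MPI_COMM_RANK (MPI_COMM_WORLD, rank, ierr)
      CALL MPI_COMM_SIZE (MPI_COMM_WORLD, nprocs, ierr)
!
!  Zero the whole array first (the obs_initial fix), then fill
!  only the datum indices owned by this tile.
      local = 0.0d0
      DO i = 1+rank, Nobs, nprocs
        local(i) = REAL(i,8)
      END DO
!
!  Sum across tiles: correct only because unowned entries are zero.
      CALL MPI_ALLREDUCE (local, global, Nobs,                          &
     &                    MPI_DOUBLE_PRECISION, MPI_SUM,                &
     &                    MPI_COMM_WORLD, ierr)
      IF (rank.eq.0) PRINT '(8F6.1)', global
      CALL MPI_FINALIZE (ierr)
      END PROGRAM obs_collect_demo
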
For the current updated file list.

kate - August 29, 2006 @ 18:24
Forcing field interpolation option- Comments (2)

I’m trying this option for the first time and came across a little bug: the horizontal grid for the global CCSM CORE files has more points than my little ROMS grid. In nf_fread2d.F, wrk is dimensioned:

real(r8), dimension(2+(Lm(ng)+2)*(Mm(ng)+2)) :: wrk

This isn’t big enough in my case. Why not make it allocatable?
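
Something along these lines is what I have in mind. This is just a sketch; Ilen and Jlen are stand-ins for the source-grid dimensions that nf_fread2d.F would get from the input NetCDF file:

      PROGRAM allocatable_wrk
!
!  Make the scratch array allocatable and size it at run time from
!  the dimensions of the field being read, instead of from the
!  fixed ROMS grid size.
!
      IMPLICIT NONE
      INTEGER, PARAMETER :: r8 = SELECTED_REAL_KIND(12,300)
      REAL(r8), ALLOCATABLE :: wrk(:)
      INTEGER :: Ilen, Jlen

      Ilen = 320                           ! source-grid x-dimension
      Jlen = 384                           ! source-grid y-dimension
      ALLOCATE ( wrk(2+Ilen*Jlen) )        ! sized to the input field
!
!     ... read and scale the field into wrk here ...
!
      DEALLOCATE ( wrk )
      END PROGRAM allocatable_wrk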

Also, I would dearly love to see the winds handled in the same way as all the other fields. I know it would get tricky if we wanted to, say, use WRF on its rotated grid to drive ROMS on another rotated grid, but CCSM should be simple enough. I’ll look into it.


arango - August 24, 2006 @ 23:38
IS4DVAR Estimated Initial Conditions- Comments (0)

The main goal of strong constraint 4DVar is to estimate optimal initial conditions by melding the first-guess (background) state and the available data for the selected assimilation time window, using the statistics of both the background and the observations. In the incremental algorithm, IS4DVAR, the estimated initial conditions are saved in NetCDF file INIname. This NetCDF file will have two time records at the end of the assimilation run. The first record is the new estimated initial conditions from data assimilation, whereas the second record is the starting first guess. Notice that in the driver is4dvar_ocean.h these records are processed with index variables Lini=1 and Lbck=2, respectively. In the IS4DVAR algorithm both records are used extensively to update the background state with the estimated data assimilation increments.

I have been asked why I store these two records in this order. Obviously, it makes more sense to store the background first (record 1) and the new estimated initial conditions in record 2 of the unlimited time dimension. Well, it is not done this way because this file will be used later to initialize the nonlinear model in forecast mode. It is easier to start from record 1 than from record 2 without special manipulation of the standard input parameters. The default in ROMS is to read initial conditions from the first record of the NetCDF file, if not specified otherwise. This strategy is extremely important in sequential data assimilation. See the previous post about the sequential IS4DVAR running script, submit_is4dvar.sh.
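
This is why the forecast step needs nothing special in the standard input. Something along these lines is all it takes (shown with the usual standard input keywords, as an illustration only; the file name is made up):

     ININAME == ocean_ini.nc    ! record 1 holds the estimated
                                ! initial conditions
       NRREC == 1               ! start from the first time record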


arango - August 24, 2006 @ 15:45
Sequential 4DVAR- Comments (0)

I made a couple of changes to the code to allow real-time sequential assimilation with the incremental 4DVAR algorithm (IS4DVAR):

  • Added a generic csh script, Bin/submit_is4dvar.sh, to carry out sequential 4DVAR experiments. That is, the state estimation is split into several assimilation cycles (time windows) as new data become available. This script runs both the IS4DVAR algorithm and the nonlinear model. It initializes the nonlinear model with the estimated optimal initial conditions from the assimilation cycle. The nonlinear model is run to the end of the assimilation cycle, and then the restart file becomes the first guess (background state) for the next assimilation cycle; see the sketch at the end of this list. The user needs to have the following directory structure:
         $MYROOT/                    root directory
                /Data                Application configuration Input NetCDF files
                /Forward             Nonlinear model working directory
                /IS4DVAR             Data assimilation working directory
                /OBS                 Observations NetCDF file
                /Storage             Storage directory for estimation output NetCDF files
      

    See the above script for more details. This script is generic and well documented.

  • Removed the writing of the cost function information to the nonlinear and tangent linear initial conditions NetCDF files. This allows the restart file to be used in sequential data assimilation. The diagnostic cost function information is still written to the MODname NetCDF file. This also cleaned up the initial conditions files used in the 4DVAR algorithms. Therefore, the following files were changed: def_ini.F, tl_def_ini.F, wrt_ini.F, and tl_write_ini.F.
  • Modified inp_par.F to write the horizontal and vertical convolution stability parameters Hgamma and Vgamma to standard output.
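
For reference, here is a bare-bones sketch of the sequential cycle that the script automates. All paths, executable and file names, and the cycle count below are illustrative only; see Bin/submit_is4dvar.sh for the real, fully documented logic:

    #!/bin/csh -f
    # Sketch only: all paths, file names, and the cycle count are
    # illustrative; see Bin/submit_is4dvar.sh for the real script.
    set MYROOT = $HOME/MyProject
    @ cycle = 1
    while ($cycle <= 4)
    #  Assimilation step: IS4DVAR writes the estimated initial
    #  conditions into record 1 of the initial conditions file.
       cd $MYROOT/IS4DVAR
       ./oceanM is4dvar_ocean.in > log.is4dvar.$cycle
    #  Forecast step: run the nonlinear model to the end of the
    #  window; its restart file is the next background state.
       cd $MYROOT/Forward
       ./oceanM nonlinear_ocean.in > log.forward.$cycle
       cp -f ocean_rst.nc $MYROOT/IS4DVAR/ocean_bck.nc
       @ cycle++
    end
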
For the current updated file list.