Description |
The potential density is now always computed; it is no longer subject to CPP options. In the equation of state, rho_eos.F, we used to have:
DO k=1,N(ng)
  DO i=IstrT,IendT
    rho(i,j,k)=den(i,k)
# if defined LMD_SKPP || defined LMD_BKPP || defined DIAGNOSTICS
    pden(i,j,k)=(den1(i,k)-1000.0_r8)
#  ifdef MASKING
    pden(i,j,k)=pden(i,j,k)*rmask(i,j)
#  endif
# endif
  END DO
END DO
The conditional CPP statement used to compute pden is now removed. This fixes the weird values that some users were getting for potential vorticity when none of the above CPP options were activated. Many thanks to Deepak Cherian for bringing this to my attention.
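For reference, a sketch of the updated loop with the outer conditional removed (whether the inner MASKING guard is retained exactly as shown is an assumption here):

```
DO k=1,N(ng)
  DO i=IstrT,IendT
    rho(i,j,k)=den(i,k)
    pden(i,j,k)=(den1(i,k)-1000.0_r8)     ! always computed now
# ifdef MASKING
    pden(i,j,k)=pden(i,j,k)*rmask(i,j)
# endif
  END DO
END DO
```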
The obs_provenance variable was missing from the metadata file varinfo.dat. This explains the segmentation violation that some of you were getting when using the 4D-Var algorithm. I was able to reproduce this behavior when I updated to a new compiler version. It is amazing what some compilers assume internally nowadays.
I also corrected some formatted statements for standard output. The recommended relationship between the field width W and the number of fractional digits D in a format descriptor is W >= D+7. I was using W = D+6, which is only correct for positive numbers. Anyway, this change removes all compiler warnings.
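To illustrate the width rule (the unit and variable names below are placeholders): an E edit descriptor field must hold an optional sign, a leading digit, the decimal point, D fractional digits, the exponent letter, the exponent sign, and two exponent digits, hence W = D+7.

```
! With D=7, W = D+7 = 14 accommodates a leading minus sign:
!   -1.2345678E+01  is 14 characters wide
WRITE (stdout,'(1p,e14.7)') value   ! safe for either sign
! W = D+6 = 13 only fits positive values; a negative value
! overflows the field and is printed as asterisks:
WRITE (stdout,'(1p,e13.7)') value
```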
|
Description |
Corrected a few things:
- The internal logical switch ObcData(ng), used to check whether boundary condition NetCDF files are needed, was only computed by the Master node in distributed-memory applications. This introduced a parallel bug in the new routine check_multifile, introduced last week. The statements that set ObcData are moved out of the Master-node standard output printing section in read_phypar, so that all distributed-memory nodes compute the ObcData logical switch.
- WARNING: Added a new argument to function load_s2d, which is part of inp_par.F. In routine read_phypar we now have:
CASE ('NFFILES')
  Npts=load_i(Nval, Rval, Ngrids, nFfiles)
  DO ng=1,Ngrids
    IF (nFfiles(ng).le.0) THEN
      IF (Master) WRITE (out,260) 'NFFILES', nFfiles(ng),         &
     &                  'Must be equal or greater than one.'
      exit_flag=4
      RETURN
    END IF
  END DO
  max_Ffiles=MAXVAL(nFfiles)
  allocate ( FRC(max_Ffiles,Ngrids) )
  allocate ( FRCids(max_Ffiles,Ngrids) )
  allocate ( Ncount(max_Ffiles,Ngrids) )
  FRCids(1:max_Ffiles,1:Ngrids)=-1
  Ncount(1:max_Ffiles,1:Ngrids)=0
CASE ('FRCNAME')
  label='FRC - forcing fields'
  Npts=load_s2d(Nval, Cval, line, label, ifile, igrid,            &
 &              nFfiles, Ncount, max_Ffiles, FRC)
This change is only relevant in nested applications, to ensure that the correct initialization is done in load_s2d when the number of files to process in each grid is different. Notice that the loading function now has the following dummy arguments:
FUNCTION load_s2d (Nval, Fname, line, label, ifile, igrid, &
& Nfiles, Ncount, idim, S)
!
!=======================================================================
! !
! This function loads input values into requested 2D structure !
! containing information about input forcing files. !
! !
! On Input: !
! !
! Nval Number of values processed (integer) !
! Fname File name(s) processed (string array) !
! line Current input line (string) !
! label I/O structure label (string) !
! ifile File structure counter (integer) !
! igrid Nested grid counter (integer) !
! Nfiles Number of input files per grid (integer vector) !
! Ncount Number of files per grid counter (integer array) !
! idim Size of structure inner dimension (integer) !
! S Derived type structure, TYPE(T_IO) !
! !
! On Output: !
! !
! ifile Updated file counter. !
! igrid Updated nested grid counter. !
! S Updated derived type structure, TYPE(T_IO). !
! load_s2d Number of output values processed. !
! !
!=======================================================================
!
- Added logic to mod_tides.F and set_tides.F so that, in refinement applications, tidal data is processed only in the coarser grid. It doesn't make sense for refinement grids to process tidal forcing data from NetCDF files; the tidal forcing enters via the coarser donor grid. Recall that the lateral boundary conditions are processed differently in refinement grids.
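The ObcData fix in the first bullet follows a familiar distributed-memory pattern: compute shared state on every node, and restrict only the reporting to the Master node. A schematic sketch, not the actual read_phypar logic (the condition assigned to ObcData is a placeholder):

```
! Bug pattern: the switch was set only inside the Master-only
! reporting block, so non-Master nodes saw an undefined value:
!
!   IF (Master) THEN
!     ObcData(ng)=...              ! set while printing the report
!     WRITE (out,*) ...
!   END IF
!
! Fix: every node computes the switch; only printing is guarded.
ObcData(ng)=...                              ! placeholder condition
IF (Master) THEN
  WRITE (out,*) 'ObcData = ', ObcData(ng)    ! report on Master only
END IF
```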
|
Description |
Corrected several problems with the grid refinement algorithms. The DOGBONE test case for refinement is working well now. I am now working on nesting applications in the Gulf of Mexico and South China Sea to fine-tune the algorithms for complex, realistic applications.
- Updated several Matlab scripts that process the contact points NetCDF file between refinement grids. This will be explained in src:ticket:615.
- Corrected the logic of how the contact grid NetCDF file is processed to load data into structure BRY_CONTACT.
- Fixed a nasty parallel bug in put_refine2d when imposing coarser grid mass flux at the finer grid physical boundaries.
- Added new routine (get_metrics) in nesting.F to process grid spacing metrics on_u and om_v, which are used to impose mass fluxes at the finer grid physical boundaries in refinement applications.
- Added a new CPP option, ONE_WAY, to carry out one-way nesting in refinement applications. The default is two-way nesting.
Additionally, corrected a few routines:
- Fixed a bug in ana_m3dobc.h. A few variables are not passed as arguments, so they are accessed from the field structures.
- Added logic in inquire.F for better reporting of error messages when the file name is blank (empty).
- Added IMPLICIT NONE to the routines in file check_multifile.F and declared a local integer variable. Many thanks to Mark Hadfield for bringing this to my attention.
- Corrected a bug in npzd_iron_inp.h: an END IF was used instead of END SELECT. Many thanks to Paul Mattern for reporting this problem.
|