WRF+SWAN WIND input grids #261

Open
DiegoBindoni opened this issue May 24, 2024 · 17 comments

@DiegoBindoni

Good afternoon,
I'm running COAWST simulations with WRF+SWAN coupling on different grids, and I'm trying to compare the results with WRF standalone simulations to assess the importance of wind-wave interactions.
I'm having issues with the WIND input grid of SWAN. If I don't include it, the simulation runs smoothly, but when I post-process the results there is zero difference from the WRF standalone simulations. When I add only INPGRID, without READINP, the simulation crashes. I attach my SWAN input files here, hoping to figure out the problem.
Thanks in advance for the help.

swan_Belgium.txt
swan_Belgium_ref3.txt
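For context, in standalone SWAN an INPGRID command only declares the grid on which an input field is defined; it is normally paired with a READINP command that actually supplies the field, which is consistent with INPGRID-without-READINP failing. A minimal hypothetical sketch (grid origin, size, spacing, dates and the file name are placeholders, not values from the attached files):

$ declare the grid on which the wind field is given
INPGRID WIND REGULAR 2.0 51.0 0. 100 80 0.05 0.05 NONSTATIONARY 20220329.000000 1 HR 20220330.000000
$ read the wind field itself from a file
READINP WIND 1. 'wind_belgium.dat' 3 0 FREE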

@jcwarner-usgs
Collaborator

When it 'crashes', what actually happens?

@DiegoBindoni
Author

WRF starts normally. The models should exchange data every 27 seconds; WRF runs with time steps of 27, 9 and 3 s, and SWAN with 90 and 30 s. At 4:30 (simulation time) COAWST crashes. The Errfiles are empty, but the PRINT files (attached) show that SWAN's computation does not continue, so I attribute the problem to SWAN. Let me know if I can provide additional files. Thanks a lot for the help.

coupling_belgium.txt
swan_Belgium.txt
swan_Belgium_ref3.txt
PRINT01-001.txt
PRINT02-001.txt

@jcwarner-usgs
Collaborator

jcwarner-usgs commented May 24, 2024

OK, first the models need to time step evenly into the coupling interval.
The coupling interval is set at 27 s, but SWAN is on a 90 s time step, so SWAN cannot provide data at that interval.

How about WRF at, say, 30, 10 and 5 s, SWAN at 90 and 30 s, with a coupling interval of 90 s?
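Put differently, both the WRF parent time step and each SWAN computational time step have to divide the coupling interval exactly. A rough sketch of the three places this shows up, assuming the parameter names follow the master coupling input of the Sandy case (all values illustrative, not taken from this setup):

! coupling_*.in: WRF <-> SWAN exchange interval (seconds)
  TI_ATM2WAV = 90.0d0
  TI_WAV2ATM = 90.0d0

! WRF namelist.input: 30 s parent step; nest steps follow parent_time_step_ratio
  time_step = 30,

$ swan_*.in: 90 s computational step on the outer grid (30 s on the nest)
COMPUTE NONSTATIONARY 20220329.000000 90 SEC 20220330.000000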

@DiegoBindoni
Author

I set WRF to 45, 15 and 5 s to keep the same time-step ratios, SWAN to 90 and 30 s, and the coupling interval to 90 s.
Now it stops after 90 seconds, with this PRINT output (still no Errfile):

WRF grid 1 sent data to WAVE grid 1

Timing for main: time 2022-03-29_00:01:30 on domain 3: 0.17093 elapsed seconds
Timing for main: time 2022-03-29_00:01:30 on domain 2: 1.11155 elapsed seconds
== SWAN grid 1 recv data from WRF grid 1
WRFtoSWAN Min/Max U10 (ms-1): -8.002854E+00 8.638475E+00
WRFtoSWAN Min/Max V10 (ms-1): -9.743976E+00 1.073547E+00

WRF grid 2 sent data to WAVE grid 1

WRF grid 3 sent data to WAVE grid 1

WRF grid 1 sent data to WAVE grid 2

== SWAN grid 1 recv data from WRF grid 2
WRFtoSWAN Min/Max U10 (ms-1): -9.916405E+00 8.954644E+00
WRFtoSWAN Min/Max V10 (ms-1): -1.222754E+01 9.448338E+00
== SWAN grid 1 recv data from WRF grid 3
WRFtoSWAN Min/Max U10 (ms-1): -6.579018E+00 0.000000E+00
WRFtoSWAN Min/Max V10 (ms-1): -3.851140E+00 1.964051E+00
== SWAN grid 2 recv data from WRF grid 1
WRFtoSWAN Min/Max U10 (ms-1): 0.000000E+00 0.000000E+00
WRFtoSWAN Min/Max V10 (ms-1): 0.000000E+00 0.000000E+00

WRF grid 2 sent data to WAVE grid 2

WRF grid 3 sent data to WAVE grid 2

== SWAN grid 2 recv data from WRF grid 2
WRFtoSWAN Min/Max U10 (ms-1): -9.926502E+00 2.860514E+00
WRFtoSWAN Min/Max V10 (ms-1): -4.630641E+00 9.434210E+00
== SWAN grid 2 recv data from WRF grid 3
WRFtoSWAN Min/Max U10 (ms-1): -6.583903E+00 0.000000E+00
WRFtoSWAN Min/Max V10 (ms-1): -3.938122E+00 2.019151E+00
[ealin35:6015 :0:6015] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x150c7c1c)

PRINT01-001.txt
PRINT02-001.txt

@jcwarner-usgs
Collaborator

Can you send the full output?
It seems odd to have stepped for 90 s and then stop.

@DiegoBindoni
Author

log.coawst.txt
Here it is

@jcwarner-usgs
Collaborator

Looks like SWAN did not really start. I see
INITIAL HOTSTART MULTIPLE '/data/nobackup/bindoni/COAWST/Projects/SWANSP/HOTFILE_W2/Belgium_d01_swanrst.hot'
INIT
That is two initialization calls. Can you comment out the second one in both swan_Belgium input files and try again?
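For clarity, the intent is that each swan_Belgium input file keeps a single initialization command, roughly like this (the hot-file path is the one already shown above; '$' starts a SWAN comment line):

INITIAL HOTSTART MULTIPLE '/data/nobackup/bindoni/COAWST/Projects/SWANSP/HOTFILE_W2/Belgium_d01_swanrst.hot'
$ INIT   second initialization commented out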

@DiegoBindoni
Author

It still crashed; I attach the outputs. I'm running SWAN in parallel, but that shouldn't be a problem as long as the hotfiles match the number of processors, no?

log.coawst1.txt
PRINT01-001.txt
PRINT02-001.txt

@jcwarner-usgs
Collaborator

Not sure. Try this (a sketch of both changes is below):
1. Run both SWAN grids with just
INIT
since SWAN does not seem to be starting.
2. Try SWAN time steps of 30 and 15 s.
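A hypothetical sketch of both changes in each swan_Belgium input file (start and end dates are placeholders; the nested grid would use 15 SEC instead of 30 SEC):

$ 1) plain cold start instead of the hot-file initialization
INIT
$ 2) a 30 s computational step, which still divides the 90 s coupling interval
COMPUTE NONSTATIONARY 20220329.000000 30 SEC 20220330.000000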

@DiegoBindoni
Author

Still the same problem, with the same output files. Unfortunately I cannot pinpoint exactly where it stops, or why.

@DiegoBindoni
Author

belgium.h.txt
scrip_coawst_belgium.in.txt
For clarity, I also attach the .h file and the SCRIP input file.

@jcwarner-usgs
Collaborator

Have you run SWAN nested by itself?
Can you try with just 1 WRF grid, or maybe 2 WRF grids?

@DiegoBindoni
Author

Yes, I ran both SWAN and WRF as standalone models and they work. I could try that, but I would like to understand why it is the wind grid that creates the problem; without it, everything works perfectly fine.

@jcwarner-usgs
Collaborator

I would like to understand also, but I need your help.
The Sandy case works with just WRF+SWAN and that is my reference; I am not sure why your setup does not work.
Can you try the Sandy case with just WRF+SWAN? It only has 2 grids, but let's see if that works for you.
If it does, then let's try your case with just 2 WRF and 2 SWAN grids.
If that works, then I would probably need to get your inputs and try it all here.

@DiegoBindoni
Author

log.sandy.txt
PRINT01.txt
PRINT02.txt

OK, the Sandy test case now crashes. It could be a problem with the installation itself, but that's odd; I was fairly sure I had the tutorial running back then. Thank you again for your time.

@jcwarner-usgs
Collaborator

I don't see the error.
When you run it, send all the output to an out file:

mpirun -np 2 ./coawstM Projects/Sandy/coupling_sandy.in &> outfile.txt

@jcwarner-usgs
Collaborator

OK. So I did a git pull, made a tar file, copied it to a new directory, untarred it, set build_coawst.sh to build the Sandy case, and edited Projects/Sandy/sandy.h to be

#undef ROMS_MODEL
#define NESTING
#define WRF_MODEL
#define SWAN_MODEL
#undef WW3_MODEL
#define MCT_LIB
#undef MCT_INTERP_OC2AT
#define MCT_INTERP_WV2AT
#undef MCT_INTERP_OC2WV

... rest of sandy.h is unchanged

./build_coawst.sh -j 14
selected 15 for ifort and the default build (no moving nest);
it built and created a coawstM.
cp Projects/Sandy/namelist .
cp Projects/Sandy/wrf* .
mpirun -np 2 ./coawstM Projects/Sandy/coupling_sandy.in
and it is running.

....

WRF grid 1 recv data from WAVE grid 2

WAVtoWRF Min/Max HSIGN (m): 0.000000E+00 8.345883E+00
WAVtoWRF Min/Max WLENP (m): 0.000000E+00 2.552078E+02
WAVtoWRF Min/Max RTP (m): 0.000000E+00 1.462009E+01
== SWAN grid 2 sent data to WRF grid 2

WRF grid 2 recv data from WAVE grid 2

WAVtoWRF Min/Max HSIGN (m): 0.000000E+00 8.363349E+00
WAVtoWRF Min/Max WLENP (m): 0.000000E+00 2.552078E+02
WAVtoWRF Min/Max RTP (m): 0.000000E+00 1.462009E+01
Timing for main: time 2012-10-28_13:30:00 on domain 1: 22.06583 elapsed seconds
Timing for main: time 2012-10-28_13:30:00 on domain 1: 22.06583 elapsed seconds
+time 20121028.133300 , step 31; iteration 1; sweep 1 grid 1
Timing for Writing wrfout_d01_2012-10-28_13:30:00 for domain 1: 0.40529 elapsed seconds
Timing for Writing wrfout_d01_2012-10-28_13:30:00 for domain 1: 0.40529 elapsed seconds
+time 20121028.133300 , step 31; iteration 1; sweep 2 grid 1
+time 20121028.133300 , step 31; iteration 1; sweep 3 grid 1
+time 20121028.133300 , step 31; iteration 1; sweep 4 grid 1
+time 20121028.133130 , step 61; iteration 1; sweep 1 grid 2
+time 20121028.133130 , step 61; iteration 1; sweep 2 grid 2
Timing for Writing wrfout_d02_2012-10-28_13:30:00 for domain 2: 0.52084 elapsed seconds
Timing for Writing wrfout_d02_2012-10-28_13:30:00 for domain 2: 0.52084 elapsed seconds
+time 20121028.133130 , step 61; iteration 1; sweep 3 grid 2
+time 20121028.133130 , step 61; iteration 1; sweep 4 grid 2
Timing for main: time 2012-10-28_13:31:00 on domain 2: 2.32662 elapsed seconds
Timing for main: time 2012-10-28_13:31:00 on domain 2: 2.32662 elapsed
......

Can you try those exact steps? Is that what you did?
I am not sure what to try next.
