Instructions for Using WRES
This wiki describes how to use the WRES to perform an evaluation, from scoping and planning the evaluation through examining the results.
As discussed in the top-level wiki, the WRES can be used in three different modes of operation:
- a command-line application, executing one evaluation per execution;
- a long-running local server, listening for evaluation requests and performing them one at a time; and
- a web-service, hosted on a machine and interacted with remotely through web-service requests. The Central OWP WRES (COWRES) is an example of a WRES web-service.
NOTE: Any WRES web-service has a URL, which is noted as its web-service URL in the instructions below.
Where appropriate, these options are referenced below, particularly when the specific instructions differ depending on the option used.
The user must begin by identifying and designing the evaluation to perform. For introductory resources on forecast verification, see Introductory Resources on Forecast Verification.
(The resources mentioned below are currently only available to NOAA personnel. We are working to migrate those resources and make them available to the public in the near future.)
For a discussion on how to plan an evaluation, the user is referred to the WRES Training, Part 1: Concepts slide deck and recorded training video. Complete training materials are available in the WRES User Support Past Training Materials wiki for the Training Conducted on July 15, 2020. In general, the evaluator must answer the questions,
Why do you want to evaluate something?
What do you want to evaluate?
How should the evaluation be conducted?
From there, the evaluator must identify the evaluation's data-to-statistics transformation pipeline:
Statistics <- Pools <- Pairs <- Data
Important points related to this are covered in the slide deck and training video linked above.
One important aspect of the evaluation plan is identifying the sources of data, both the forecast (or simulation) data to evaluate and the observation (or simulation) data against which to perform the evaluation. In general, evaluation input data can be either
- requested from a web service, or
- read from flat files available on the local file system or via an online resource for which the WRES has read permissions.
The following web services may be accessible options for obtaining input data:
- USGS Observations: The WRES can pull observations directly from the USGS National Water Information System (NWIS) where such observations are available. Instructions for configuration are provided via Example 5 within Complete Examples of Evaluation Declarations (a declaration sketch also follows this list). The WRES is responsible neither for the accuracy of the data nor for the availability of the data services or of the data within those services. One may subscribe to NWIS service announcements at https://listserv.usgs.gov/mailman/listinfo/nwisweb-notification
- Water Resources Data Services (WRDS): WRDS web services, which can be used as a source of observations, River Forecast Center (RFC) AHPS forecasts, and National Water Model (NWM) forecasts and simulations, are only accessible from within the National Water Center (NWC) network. See the user support wiki for the COWRES, which is only available to NWS users, for more information.
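As a rough illustration, the fragment below sketches how an `observed` data source might point at USGS NWIS. The key names (`sources`, `interface`, `variable`) are recalled here rather than quoted, so treat them as assumptions and confirm them against Example 5 in Complete Examples of Evaluation Declarations and the Declaration language wiki.

```yaml
# Sketch only: key names are assumed, not quoted from the schema;
# consult the Declaration language wiki for the authoritative syntax.
observed:
  label: USGS NWIS instantaneous streamflow observations
  sources:
    - uri: https://nwis.waterservices.usgs.gov/nwis/iv
      interface: usgs nwis   # tells the WRES how to query the service
  variable: '00060'          # USGS parameter code for streamflow
```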
Files provided to the WRES must be in one of the following formats:
- WRES-Compliant CSV Files: The files must follow a specific format, described at Format Requirements for CSV Files.
- CHPS/FEWS PI-timeseries XML files: The files can be gzipped (i.e., *.xml.gz) or tarred and gzipped (see compressed archives, below). However, they may not be gzipped XML files that are then tarred (i.e., a .tar containing .xml.gz is not allowed).
- Fast-Infoset encoded CHPS/FEWS PI-timeseries XML files: The files can be gzipped (i.e., *.fi.gz) or tarred and gzipped (see compressed archives, below). However, they may not be gzipped files that are then tarred (i.e., a .tar containing .fi.gz is not allowed).
- NWS datacard format files: This format is allowed for observed or simulation data only. It is highly recommended that this format be avoided if possible: it describes the contained data insufficiently and therefore requires declaration that other formats do not.
- WRDS-JSON Format Files: WRDS services use a JSON format for data interchange. The WRES can also read data formatted following WRDS-JSON from flat files.
- NWM v1.1 - v3.0 compliant netCDF files: To be considered NWM-compliant, the netCDF files must include the expected metadata. For more information, see Format Requirements for NetCDF Files.
- NWM data available online: In general, the WRES can read NWM netCDF files from any online location so long as (1) the WRES has access and (2) the files are organized identically to NOMADS; see Declaring the Raw NWM Data Source. For example, it can access data provided through the para.nomads.ncep.noaa.gov website: https://para.nomads.ncep.noaa.gov/pub/data/nccf/com/nwm/para/. NOTE: If you have access to NWC resources, then you can obtain NWM data from the NWC D-Store; see the WRES User Support NOAA VLab project wiki for more information.
- USGS JSON (WaterML): The WRES can read files in USGS-style JSON WaterML format. WaterML is described here.
The following compressed archives of files can be read:
- Tarred/Gzipped Archives: The WRES can read archives of tarred/gzipped (e.g., .tgz or .tar.gz) files following any of the formats mentioned above with the exception of raw NWM data.
- Gzipped Data: The WRES can read gzipped (e.g., .gz) files following any of the formats mentioned above with the exception of raw NWM data.
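For example, a set of PI-timeseries XML files could be bundled into an archive that the WRES can read as follows. Note that the raw .xml files are tarred and then gzipped together as a whole; individually gzipped .xml.gz files must not be placed inside a tar.

```bash
# Create a tarred/gzipped archive of PI-timeseries XML files for the WRES.
# Tar the raw .xml files and gzip the archive as a whole; do NOT gzip the
# files individually first (a .tar containing .xml.gz is not allowed).
tar -czf timeseries.tar.gz *.xml
```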
Beyond identifying the data sources, the evaluation plan must address several other aspects, including:
- The geographic features to evaluate and whether they should be evaluated separately or pooled into groups. For example, see Pooling geographic features.
- The desired time scale at which the data will be evaluated. The time scale comprises both the time period over which a value was derived, or “period”, and how the value was derived over that time period, or “function”. For example, you could evaluate 24-hour average streamflow given data that is 6-hour instantaneous streamflow, or an accumulated precipitation amount over a 24-hour period calculated from 6-hour accumulated precipitation amounts. Optionally, rescaling may be performed using a fixed interval that is bounded by one or two month-days, such as 1 April through 31 July or 90 days beginning on 1 April. For further guidance, see Time Scale and Rescaling Time Series.
- Whether a baseline forecast system, against which the target forecast system is to be compared, will be included in the evaluation. Such a baseline can have implications for which (observation, forecast) pairs are included, since ensuring an apples-to-apples comparison is important: if pairs for different forecast systems span different time periods, or include different gaps within the same time period, the comparison may be invalid.
- Whether a covariate will be included in an evaluation. A covariate allows for the evaluation pairs to be constructed conditional upon another variable, such as evaluating freezing level conditional on rainfall occurrence. For further details, see Using covariates as filters.
- How the verification (observation, forecast) pairs will be pooled for calculating metrics. By default, metrics are calculated for all provided pairs, regardless of lead time. Most often, however, statistics will need to be computed for each lead time independently, since there is typically a strong dependence between forecast quality and lead time. Pooling windows can also be defined relative to issued time (i.e., forecast basis time or T0), allowing for a rolling-windows or seasonal analysis, as well as relative to thresholds, allowing for evaluation of high-flow forecasts. A declaration sketch illustrating some of these choices follows this list.
- The metrics desired. For a list of available metrics, see the List of Metrics Available.
- Whether to calculate the sampling uncertainties. For further details see Sampling uncertainty assessment.
- Whether the results are to be summarized over features or feature groups. For further details, see Evaluation summary statistics.
- The outputs desired. Outputs can include ASCII Comma Separated Value (CSV) files, 2-D chart PNG files, pairs CSV files, and NetCDF files. See the discussion of examining the WRES output verification statistics, below.
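To make some of these aspects concrete, the fragment below sketches how they might be expressed in a declaration. The key names (`time_scale`, `lead_times`, `lead_time_pools`, `metrics`, `output_formats`) are assumptions recalled from the WRES declaration language rather than quoted from it; the Declaration language wiki is authoritative.

```yaml
# Sketch only: plan choices expressed as declaration, with key names assumed.
time_scale:            # evaluate 24-hour mean streamflow
  function: mean
  period: 24
  unit: hours
lead_times:            # consider forecasts out to 72 hours...
  minimum: 0
  maximum: 72
  unit: hours
lead_time_pools:       # ...pooled into independent 6-hour windows
  period: 6
  unit: hours
metrics:
  - mean absolute error
  - mean error
output_formats:
  - csv2
  - png
```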
The user must identify the data necessary for their evaluation. There are four types of data that can be supplied to the WRES for an evaluation:
- The `observed` or “left” data sources: Typically, observations or simulations against which simulations or forecasts are evaluated.
- The `predicted` or “right” data sources: Typically, simulations or forecasts that are to be evaluated.
- The `baseline` data sources: Typically, simulations or forecasts against which the `predicted` data will be compared, for example, to calculate skill scores.
- The `covariates` data sources: Additional data sets, if needed, which are used to further filter the evaluation pairs by conditioning on another variable.
- `other`: These are files or web resources that do not contain evaluation data, but rather other data or information required for an evaluation. For example, thresholds can be provided in a CSV file that the evaluation declaration points to.
The data for all sources can be any of the following:
- a web service to which the WRES has access, such as USGS NWIS or WRDS (if running in the NWC network);
- a file local to the WRES instance and to which it has read permissions;
- a file placed online in a location the WRES can access; or
- a file posted directly to the COWRES instance (if a WRES web-service instance is being used; see below; `covariates` data cannot yet be posted).
Custom `observed`, `predicted`, and `baseline` data (not `covariates` data) can be posted directly to a WRES web-service; see below. However, for `other` data, such as thresholds, the WRES web-service does not yet support posting. In those cases, the only option is to place the data local to the web-service or make it accessible online. For the COWRES, that will require WRES team assistance; please contact the WRES team through an appropriate ticket.
Declaration of an evaluation project is described in the wiki, Declaration language.
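As a minimal, hedged sketch (not a verified example; the Declaration language wiki defines the exact schema), a declaration pairing forecasts against observations from local files might look like the following. The file names are hypothetical placeholders.

```yaml
# Minimal sketch of an evaluation declaration; file paths and key names
# are assumed for illustration only.
observed:
  sources: observations.csv     # “left” data: WRES-compliant CSV
predicted:
  sources: forecasts.xml        # “right” data: CHPS/FEWS PI-timeseries XML
baseline:
  sources: persistence.csv      # optional: baseline for skill scores
```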
For users of a local WRES command-line application or local-server, a declaration can be checked, or validated, simply by executing the evaluation and examining the logging. For users of a WRES web-service, however, validating first can avoid posting a declaration and "waiting your turn" for an evaluation worker to become available, only to see the declaration fail validation after that wait. A WRES web-service offers an endpoint specifically for validation that does not require a worker to be free.
Checking can be done through a WRES web-service landing page via a browser or by posting a declaration to a specific web-service endpoint. To use the landing page, navigate in a browser to `https://.../api` (replace the three dots, "...", with the web-service URL), paste the declaration YAML into the text area, and click the Check button at the bottom. The browser will be forwarded to `https://.../job/validate/html`, which will display the declaration and any problems discovered under Validation Issues at the bottom. If no issues are listed (i.e., no messages appear below Validation Issues), then the declaration check passed.
Posting a declaration to be checked can be done using a tool such as `curl` or an equivalent. The command to execute is identical to the command for posting an evaluation, with `projectConfig` set to the declaration, except that the URL ends in "/validate". For example:

```bash
curl -v --cacert [appropriate CA .pem file] -d "projectConfig=$(cat evaluation.yml)" https://.../job/validate
```

Replace "..." with the web-service URL and make sure to employ an appropriate certificate authority .pem file for authenticating the web-service.
An evaluation can be executed using a downloaded WRES command-line application, a long-running local-server, or through a WRES web-service for those who have access to one. Each is described below.
Instructions for use of the WRES command-line application are provided in the top-level wiki.
Instructions for using a WRES local-server to execute an evaluation are provided in the wiki WRES Local Server.
Fully functional Python and bash scripts are available to support programmatic interaction with a WRES web-service, including the posting of custom data sets; instructions are provided in the WRES Scripts Usage Guide. It is recommended that users employ those scripts when possible. However, if the scripts cannot be used, perhaps because a different programming language is being used, then instructions for programmatic interaction are available in the wiki, Instructions for Programmatic Interaction with a WRES Web-service. If you need to post custom evaluation data sets to the web-service to support your evaluation, instructions are available in the wiki, Posting timeseries data directly to a WRES web‐service as inputs for a WRES job.
Instructions for a human to conduct an evaluation through interactions with a WRES web-service landing page, using command-line tools such as `curl` to obtain the output statistics, are provided in Instructions for Human Interaction with a WRES Web-service.
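As an illustration only: once an evaluation completes, outputs are typically retrieved with further `curl` requests against job-specific endpoints. The endpoint path and job identifier below are assumptions made for the sake of example; confirm the actual endpoints in Instructions for Human Interaction with a WRES Web-service.

```bash
# Hypothetical sketch: the /job/<id>/output path and job id 12345 are
# assumed for illustration; consult the wiki for the actual endpoints.
curl -v --cacert [appropriate CA .pem file] https://.../job/12345/output
```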
With the output gathered, it must then be interpreted. The user should also identify whether different or additional outputs are required; in other words, whether another execution of the WRES is needed.
To support this examination, the WRES includes a variety of graphical and numerical output formats for statistics, including:
- Comma Separated Values (CSV; declaration: `csv2`; file extensions: `.csv.gz` and `.csvt`; see Output Format Description for CSV2)
- Network Common Data Form (NetCDF; declaration: `netcdf2`; file extension: `.nc`)
- Portable Network Graphics (PNG; declaration: `png` or `graphic`; file extension: `.png`)
- Scalable Vector Graphics (SVG; declaration: `svg`; file extension: `.svg`)
- Protocol Buffers (Protobuf; declaration: `protobuf`; file extension: `.pb3`)
If another execution of WRES is required, then return to Step 3, as appropriate. If additional data is required, return to Step 2.
This step is particularly important for those using a WRES web-service to run the evaluation. Whereas a command-line application or local-server runs on the user's machine, so that the user has control over disk usage and system resources, a web-service is typically remote and not available for the user to manage directly. The WRES web-service, therefore, has mechanisms to control the use of system resources. Specifically:
- A WRES web-service provides a mechanism to delete evaluation output, which is described in the web-service instruction wikis referenced in Step 4. This is the only cleanup that a user is expected to perform themselves. It is expected that a user let the web-service know when the outputs have been gathered and can be deleted. If a user does not remove evaluation output, then the web-service administrator will do so at a time of their choosing.
- A WRES web-service "forgets" about an evaluation's results and logging after 14 days.
- A WRES web-service removes posted evaluation input when the disk is near full. The web-service administrator may also do so at a time of their choosing.
To request assistance in using the WRES, request a new feature related to an evaluation, or to report a bug, post a ticket through the GitHub project, https://github.com/NOAA-OWP/wres/issues/new.
National Weather Service users, only, should make any such requests by posting a ticket to the WRES User Support project in NOAA's VLab.
The WRES Wiki
- Options for Deploying and Operating the WRES
  - Obtaining and using the WRES as a standalone application
  - WRES Local Server
  - WRES Web Service (under construction)
- Format Requirements for CSV Files
- Format Requirements for NetCDF Files
- Introductory Resources on Forecast Verification
- Instructions for Human Interaction with a WRES Web-service
- Instructions for Programmatic Interaction with a WRES Web-service
- Output Format Description for CSV2
- Posting timeseries data directly to a WRES web‐service as inputs for a WRES job
- WRES Scripts Usage Guide