Validation Case Setup Example
An example step-by-step procedure for authoring a validation case, from setting up the input files to compiling the FDS Validation Guide.
Before getting started, ask yourself: are you conversant in Matlab and LaTeX? No? Do yourself a favor: take a week out of your busy life and learn the basics of these tools. I guarantee your investment will pay off a thousand fold!
The purpose of this page is to document, through example, all of the steps required to get a validation case from conception to full automation, such that the FDS Validation Guide can be built without error by pressing a single button. Similar procedures also apply to the FDS Verification Guide.
The reader should take note of the files that need to be committed. The most common problem we encounter when compiling the guides is that an author will assume their case is fully committed because they are able to compile the guide on their machine, but if they have not committed all the necessary files the guide will not compile for the other project members.
Each validation case has a sub-directory in the `Validation` directory of the `firemodels/fds` repository. Choose the name of this directory carefully, because it will be used throughout the process. Each case directory contains a sub-directory named `FDS_Input_Files` that holds all the FDS input files for the series. There is also a bash shell script called `Run_All.sh`. As its name implies, this script runs the cases by copying the FDS input files into a new directory called `Current_Results` and then running them. Once the cases have completed, another script called `Process_Output.sh` copies from `Current_Results` just the output files that are to be committed to the `firemodels/out` repository. It is too time-consuming to run the validation cases each night like the verification cases; thus, we commit the necessary output files to the `out` repository. The reason we keep a separate repository for output is that periodically we need to obliterate older files or else the repository would grow too big. There is no easy way to obliterate only some files within a git repository; thus, the `out` repository was created to make it easy to remove junk without touching important FDS source files.
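The copy-and-run pattern of `Run_All.sh` can be sketched roughly as follows. This is a hypothetical, simplified version: the real script invokes FDS (often through a queueing system), and the input filename here is a stand-in created so the sketch runs on its own.

```shell
#!/bin/bash
# Hypothetical sketch of the Run_All.sh pattern: copy the FDS input files
# into Current_Results, then run each case. The real script actually runs
# FDS; here we only echo the command.
mkdir -p FDS_Input_Files Current_Results
touch FDS_Input_Files/Steckler_010.fds     # stand-in input file for this demo
cp FDS_Input_Files/*.fds Current_Results/
for f in Current_Results/*.fds; do
  echo "would run: fds $(basename "$f")"
done
```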
For each validation case, there are corresponding directories in the `firemodels/exp` repository and the `firemodels/out` repository to hold the experimental data and the FDS output files, respectively. Name these directories exactly the same as you did in the `fds` repository. Simply follow the existing structure.
Similarly, in the `Validation` folder of the `fds` repository, enter the name of your case into the file called `Process_All_Output.sh`, following the existing syntax, in alphabetical order. This script has two purposes. First, it is read (not run) by firebot to compile the section at the end of the FDS Validation Guide called "Summary of FDS Validation Git Statistics". This is the master list of validation cases. Second, the script is run by the person who runs all the validation cases before a release. It has a handy feature: it only processes cases that have run to completion, and it indicates if there are problems with a given case.
There are two other useful scripts in the `Validation` folder: `Run_Parallel.sh` and `Run_Serial.sh`. As their names suggest, these scripts launch all the parallel and serial jobs at once. They are not used by firebot, but they do make it easy for the person running the validation cases to launch the cases en masse.
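The en-masse launch idea amounts to looping over the case directories and invoking each series' `Run_All.sh`. A sketch, with placeholder directory names created here only so the demo is self-contained (the actual scripts also handle queue options):

```shell
#!/bin/bash
# Sketch only: loop over validation case directories and launch each series.
# The directory names are placeholders, not a complete case list.
mkdir -p Steckler_Compartment NRC_NIST
for case_dir in Steckler_Compartment NRC_NIST; do
  echo "would launch: $case_dir/Run_All.sh"
done
```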
The first thing to think about is what the validation case names will be. This is more important than it may seem at first. The reason is that the names work their way into the depths of the processing scripts through attachments to output data files, etc., and making any changes later can prove to be a nightmare. So, do all of us a favor and think carefully about the naming convention. The names ought to be descriptive, but somewhat compact. We also want to credit the test lab, sponsor, or individual researcher, if appropriate. Thus, the `NRC_NIST` series are experiments that were sponsored by the US Nuclear Regulatory Commission and conducted at NIST. The `Steckler_Compartment` series are experiments conducted by Ken Steckler at NIST.
The input and experimental data files ought to include the basic series name. Thus, the input files for `Steckler_Compartment` are `Steckler_010.fds`, `Steckler_165.fds`, and so on. The numbers should be tied directly to the test report. Do not reinvent test numbers --- use whatever convention is in the test report, no matter how illogical it might seem.
- The first issue to discuss about input files is the text editor used to create them. Be careful not to commit DOS files with "^M" carriage returns. This happens commonly with Windows-based text editors. The safe bet is to look at your files in vi (`vi -b`) before committing.
- Make sure your `CHID` is the same as the input filename (minus the extension).
- Minimize input parameters. Do not add a parameter to the input file if it is already the FDS default.
- Minimize comments. Some users write dissertations inside their input files. Save this for the guide write-up.
- Minimize the size of the output files. In most cases we do not need many slice files or time history devices. When possible, use line devices (`POINTS=...`) to gather statistics. This both reduces the clutter in the input file and reduces the size of the output. Usually the device files for validation cases are committed, and we want to minimize the size of those files.
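One way to catch the "^M" problem before committing is a quick grep for carriage returns. This is a sketch; the input file is simulated here so the demo is self-contained.

```shell
#!/bin/bash
# Sketch: detect DOS carriage returns in an input file before committing.
# The file is created here only to make the demo runnable on its own.
printf 'T_END=100.\r\n' > example.fds     # simulate a Windows-edited file
if grep -q $'\r' example.fds; then
  echo "example.fds has DOS line endings; convert it (e.g., with dos2unix) first"
fi
```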
In this section we discuss the process of comparing the FDS results with experimental data. There are a few ways this can be done. The easiest path is when the expected results and the FDS results can be plotted with the `fds/Utilities/Matlab/scripts/dataplot.m` Matlab script. This is usually the case if we are just looking at time histories of temperature, heat flux, and so on. However, if a significant amount of post-processing must be done before direct comparisons can be made, it may be necessary to create your own Matlab script to generate the plots for the FDS Validation Guide. Note that at present Matlab is the standard. This is the only way we have to make everything look identical throughout the guides.
The experimental data should be organized in a simple, comma-delimited file (`.csv`), with the data in columns with simple, clear header names. The same naming conventions apply within the data file as listed above for input filenames: no decimals, spaces, etc. Once the data have been organized, you may commit the file to the appropriate directory in the `firemodels/exp` repository.
Two important quantities when comparing FDS to an experiment are the HGL (Hot Gas Layer) temperature and height. Neither of these quantities is measured or computed directly; rather, they are computed from vertical arrays of thermocouples. When you run `FDS_validation_script.m`, a script called `layer_height.m` is called. This script scans all of the `FDS_Output_Files` directories for all the cases listed in `Validation/Process_All_Output.sh`, and whenever it finds a file that ends with `_HGL.input`, it computes the HGL temperature and height for that particular case. This input file has the following format:
```
 3                      Number of TC trees
18                      Number of TCs per tree
 0.05  54  72  90       Height (m) of the lowest TCs, and the column numbers in the devc file
 0.30  55  73  91
 0.55  56  74  92
 0.80  57  75  93
 1.05  58  76  94
 1.30  59  77  95
 1.55  60  78  96
 1.80  61  79  97
 2.05  62  80  98
 2.30  63  81  99
 2.55  64  82 100
 2.80  65  83 101
 3.05  66  84 102
 3.30  67  85 103
 3.55  68  86 104
 3.80  69  87 105
 3.85  70  88 106
 3.90  71  89 107       Height (m) of the highest TCs, and the column numbers in the devc file
 0.33 0.33 0.34         Weighting factors for each TC tree
PRS_D1_devc.csv         The file containing the TC data
110                     Number of columns in the file
 3                      Row to start reading data
 4.00                   Actual ceiling height of the compartment
-60                     Time to begin reading the data
PRS_D1_Room_2_HGL.csv   File to hold the computed layer height and temperatures
```
You only need to commit these `_HGL.input` files to the repository. The `layer_height.m` script automatically creates the `_HGL.csv` file. You can then invoke this file when running the `dataplot.m` script or your own special script.
The preferred method for plotting comparisons of experimental data and FDS predictions is the Matlab script `fds/Utilities/Matlab/scripts/dataplot.m`. This script assumes that there are two comma-delimited spreadsheet files containing columnar data that can be plotted side by side on the same graph. Multiple pairs of columns can be plotted. If the structure of the experimental and/or FDS data is more complicated than this, then you must write your own Matlab script to process the output, as described in the next section.
In `fds/Utilities/Matlab` there is a file called `FDS_validation_dataplot_inputs.csv` which contains all of the information needed to make the majority of the plots you find in the FDS Validation Guide. The entries are organized by the type of output quantity. The first rows are for HGL (Hot Gas Layer) temperature. Most compartment fire validation cases include a comparison of HGL temperature and depth.

A detailed explanation of the columns within the dataplot input file is found in the wiki page "Using the Matlab script dataplot.m".
If the plots produced by `dataplot.m` are not appropriate for your case, you might have to write your own Matlab script to process the data. If you are Matlab savvy, this is not too hard. If not, it is a great way to learn Matlab! The most difficult part for the uninitiated seems to be keeping the repository directory structure straight while creating the plotting routines. This is critical, because ultimately the script will live here:

`Utilities/Matlab/scripts`

But it will be launched via the master script (`FDS_validation_script.m`), which lives here:

`Utilities/Matlab`

These relative directories can be confusing at first, but follow this example and you will get things right from here on.
The first thing to do is read in the experimental data and the FDS data, perform whatever calculations are necessary, and plot the results. In the next section we will worry about the details of how the plot looks.
As usual, think of a good name for your script and add it to the master list of special scripts under "Special cases" in `FDS_validation_script.m`. While you are developing the script, it is usually helpful to comment out all the other calls above your script. This allows you to run your script in "production mode" from exactly the directory where it will live permanently. Mark my words: if you decide to do all your development offline in some other directory, you will have all kinds of trouble merging everything in once you think you are finished. Please do not think that because you can generate an Excel plot you are anywhere close to finished.
The best advice for creating your own script is to copy the format of one that already exists.
Before you commit all your new entries to `dataplot.m` or your new scripts, you should run the master Matlab script `FDS_validation_script.m` and inspect the plots. However, running the entire script takes a lot of time. To save time, comment out with a `%` all the scripts you don't need, leaving the directory definitions as they are. Change the line:

`[saved_data,drange] = dataplot(Dataplot_Inputs_File, Working_Dir, Manuals_Dir);`

to

`[saved_data,drange] = dataplot(Dataplot_Inputs_File, Working_Dir, Manuals_Dir, 'My Tests');`

where `My Tests` is an entry in the second column of `FDS_validation_dataplot_inputs.csv`, under the header `Dataname`. This way, only those plots will be processed.
Remember to restore `FDS_validation_script.m` to its original form before committing.
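One way to restore the master script is to let git discard your local changes. The sketch below uses a throwaway repository so it can run anywhere; in practice you would run only the `git checkout` line, inside `Utilities/Matlab`.

```shell
#!/bin/bash
# Sketch: discard local edits to FDS_validation_script.m before committing.
# A throwaway repo is created here so the demo is self-contained.
mkdir -p demo_repo
git -C demo_repo init -q
git -C demo_repo config user.email "you@example.com"
git -C demo_repo config user.name "you"
echo "original contents" > demo_repo/FDS_validation_script.m
git -C demo_repo add FDS_validation_script.m
git -C demo_repo commit -qm "add master script"
echo "% temporary edits" >> demo_repo/FDS_validation_script.m   # edits made while testing
git -C demo_repo checkout -- FDS_validation_script.m            # discard them
cat demo_repo/FDS_validation_script.m
```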
The LaTeX source for the guide is located in `Manuals/FDS_Validation_Guide/FDS_Validation_Guide.tex`. The source files for the guide are organized by chapter, like "HGL Temperature and Depth". Edit this file with whatever editor you choose. Decide where your section should reside. Typically results are organized by output quantity, and test series names are entered in alphabetical order. Follow the existing conventions.
As with the Matlab script, conforming to the conventions we have established in the guide is basically a matter of copying and editing an existing section.
Depending on your computer setup, there are different ways to compile the document. But the basic sequence of operations is the same. You have to run LaTeX once, then run BibTeX once, then LaTeX two more times to resolve all references. In Linux it looks like this (note that you do not include the extension for the bibtex run):
```
$ pdflatex FDS_Validation_Guide.tex
$ bibtex FDS_Validation_Guide
$ pdflatex FDS_Validation_Guide.tex
$ pdflatex FDS_Validation_Guide.tex
```
Alternatively, you can run
$ ./make_guide.sh
This script compiles the LaTeX document by running pdflatex, bibtex, pdflatex, pdflatex, which is the sequence needed to resolve all references in the document. The script then checks for warnings or error messages associated with include files (like plots) that are not found. If all goes well, you will get something like:
FDS Validation Guide built successfully!
If all is not well, you may see something like
! LaTeX Error: File SCRIPT_FIGURES/heated_channel_Tplus not found.
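After a manual compile, you can spot such errors by grepping the log for missing files, similar in spirit to the check `make_guide.sh` performs. This is a sketch; the log line is simulated so the demo is self-contained.

```shell
#!/bin/bash
# Sketch: scan a LaTeX log for missing include files, as make_guide.sh does in spirit.
# The log is simulated here; a real run would produce FDS_Validation_Guide.log.
printf '! LaTeX Error: File SCRIPT_FIGURES/heated_channel_Tplus not found.\n' > guide.log
if grep -q 'not found' guide.log; then
  echo "Missing include files -- check the log before committing"
fi
```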
Note for Windows MiKTeX users: you may find it necessary to increase the `pool_size` in the config file. Open a CMD window and type:

`initexmf --edit-config-file=pdflatex`

This opens a config file in Notepad. Next, add the line:

`pool_size=5000000`

and save the file. The guide should compile without issue after that.
Once you have compiled the guide without errors or warnings, go ahead and update your repository and commit the .tex and .bib files.
If everything has been done correctly, then our automated user, Firebot, will be able to generate your plots and compile the guide while you slumber. If something has been missed, we will all wake to the following email: Firebot angry! Well, mildly upset. Not to worry. The great thing about the repository is that we all iterate and fix our errors as soon as possible.