Paraview: Dump to Pvd
As an alternative to producing vtk files via postprocessing of dump data, you may output these files directly while your simulation is running. This is accomplished via the Interpolate2DSliceToVtk observer, which will create vtk files and the corresponding pvd file for a slice through the simulation that you specify. This can be useful for test runs, since slices take up much less space than complete volume data but can still provide diagnostic information.
To set this up, first prepare all the necessary files (primarily .input) for your simulation. Among these should be Observers.input which will contain instructions for how the simulation should output its data for later visualization. This is what requires modification to use Interpolate2DSliceToVtk. The file probably contains a line reading "Observers = " followed by a number of observers. To tell the simulation to use Interpolate2DSliceToVtk, add the following to the list of observers:
Interpolate2DSliceToVtk(
Input = [name of input variable];
Coords = [desired coordinate system];
Basename = [desired output directory];
Origin = [centre of slice];
NormalVec = [normal to slice];
NumOfPointsPerM = [points per unit coordinate];
Size = [points in each direction from Origin];
SizeInCoords = [turns Size into units in each direction, optional];
SpatialCoordMap = [optional spatial coordinate map];
TopologicalInterpolator = [desired topological interpolator];
)
If you are unsure what a valid choice for one of the parameters is, you can substitute in gibberish and the simulation will tell you what the available options are.
As an example, the following could be used on black hole binary data:
Interpolate2DSliceToVtk(
Input = SpatialRicciScalar;
Coords = GridToInertial::MappedCoords;
Basename = SpatialRicciScalar;
Origin = 0, 0, 0;
NormalVec = 0, 0, 1;
NumOfPointsPerM = 4;
Size = 30;
SizeInCoords = true;
SpatialCoordMap = GridToInertial::SpatialCoordMap;
TopologicalInterpolator = Spectral;
)
Note that the choice of TopologicalInterpolator will depend on the data you are observing. For black hole binary data, the Spectral interpolator is required. For hydro data, one of the available hydro interpolators should be used, such as FVPolynomial.
Running your simulation with this observer will create a directory named Basename within which the desired vtk and pvd files will be produced. These can then be visualized with Paraview.
In order to take advantage of the parallel IO capabilities of DumpTensors, only a few additional options must be specified in the Observers.input file of a given run. Below is a list of the options for the DumpTensors observer, short descriptions of each, and the default values for each.
-Input - This is the name of the tensor that is being dumped. This is a required option and therefore has no default value.
-Parallel - If true, distributes the IO of H5 files onto a specified number of processors. Default value is true.
-FileNames - Optional. The name of the resulting XDMF file (if produced). Default value is the name given for Input.
-AxisLabels - Optional. Default value is the empty string.
-H5 - Optional. Default value is true.
-H5Prefix - Optional. The base name of the H5 files to be produced. Default value is “UntitledTensor”.
-Xdmf - Optional. If true, XdmfOutput will produce an XDMF file. This is necessary in order to visualize the data in the H5 files. Default value is true.
-XdmfCoords - Optional. The coordinates to use for the XDMF file. Default value is “GlobalCoords”.
-MangleFileNames - Optional. Default value is false.
Below is an example of an input file that specifies the parallel output of H5 files and an XDMF file for a simple scalar wave simulation.
Observers =
Add(
ObservationTrigger = EveryDeltaT(DeltaT=0.01);
Observers =
DumpTensors(Input=g;Parallel=true;H5Prefix=DumpTensorTest;Xdmf=true),
)
;
To test out DumpTensors on a simple scalar wave, see the wiki entry “Running a Scalar Wave Example in SpEC”. Follow the instructions to set up the evolution. Once all of the necessary evolution files are in a directory, change the Observers.input file to allow for parallel IO (see above example). You may also alter the other input files to your liking to change the number of time steps, the amplitude of the wave, the type of subdomain, etc. Once all of the input files are in order, run the following command in the directory containing them (or create a run.sh file containing the command and run that):
mpirun -npernode <N> EvolveHyperbolicSystem
where <N> is the number of processors that you wish to use for file IO. Each zwicky node has 12 processors, so if N>12, you will need to get additional nodes. It is important to note that, when you are specifying the size and extents of your domain, you must have more subdomains than processors for DumpTensors to work properly. Otherwise, some processors will be left without any subdomains to work on.
Once EvolveHyperbolicSystem finishes, you should be left with several H5 files (one per time step of the simulation) and a single XDMF file (of file type .xmf) in your run directory (if Xdmf=true). If you wish to look at the contents of an H5 file, run the following command:
h5dump <filename>.h5 > temp.txt
This will create a .txt file called temp.txt that will contain an ASCII version of the contents of <filename>.h5. Note that the tensor components are presented as separate scalar datasets.
If you wish to visualize the H5 files produced, simply open the .xmf file produced by the run in Paraview. Once the .xmf file is open in Paraview, you may select which tensor components you wish to color code and visualize.
Both ParaView and VisIt (an alternative visualization software) understand the data-format of the Visualization Toolkit (VTK). In VTK, each timestep of each subdomain is represented by one file. Such a file contains both the geometry information (i.e. coordinates), as well as an arbitrary number of datasets. Since there are typically many files, we place them all into a subdirectory, say Data/. The subdirectory name (Data in this example) is referred to as the "Basename". There are also index-files which tell ParaView and VisIt which of the files in Data/ belong to each timestep. The index files are different for VisIt and ParaView, and are called Data.visit (for VisIt) and Data.pvd (for ParaView). Data.visit and Data.pvd must have a certain structure which makes it impossible to append to these files. Therefore, the conversion routines below require that these files do not exist when they start.
Once converted, please compare the format of your pvd files to the provided examples before moving on to the next step.
Most often, the .dump data will all be in a single directory, such as Lev#_AA/Run/ApparentHorizons/. If this is the case, one can use ConvertSurfaceToVtk.
To convert, run
ConvertSurfaceToVtk -b Data AhA_L08_LOut16{x,y,z}_qwerty.dump [FurtherData.dump [FurtherData2.dump [...]]]
In this case, -b Data specifies the Basename of the generated files, which will output Data.pvd, Data.visit, and a directory called Data with .vtk files inside. The .pvd and .visit files are really just lists of pointers to the individual .vtk or .vtu files in the directory Data. Subsequently, specify the x-, y-, and z-coordinate file. At the end of the command line, there may be additional data-files (generated with the same L as the coordinates) which are entered into the VTK-files.
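The {x,y,z} in the command above is ordinary bash brace expansion, so ConvertSurfaceToVtk receives three separate coordinate-file arguments. You can preview exactly what the shell will pass with echo:

```shell
# bash expands the brace pattern before ConvertSurfaceToVtk ever runs;
# echo shows the resulting argument list without touching any files.
echo AhA_L08_LOut16{x,y,z}_qwerty.dump
# AhA_L08_LOut16x_qwerty.dump AhA_L08_LOut16y_qwerty.dump AhA_L08_LOut16z_qwerty.dump
```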
If the data is not all in the same directory, such as Lev#_AA/Run/ApparentHorizons/, Lev#_AB/Run/ApparentHorizons, etc., you have two methods. First, if your data is in only 2 or 3 directories, you can use the single-directory method on each directory and then manually concatenate the .pvd files produced for each one; just make sure that you delete the headers and footers between the concatenated files so that there is only one header and one footer in the final .pvd file.
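The manual header/footer surgery can be sketched as a small shell helper (a hypothetical function, not a SpEC script; it assumes the standard .pvd layout shown in the examples on this page):

```shell
# Hypothetical helper: merge several .pvd files into one by keeping only
# the <DataSet> lines from each input and writing a single header/footer.
merge_pvd() {
  out=$1; shift
  {
    printf '<?xml version="1.0"?>\n'
    printf '<VTKFile type="Collection" version="0.1" byte_order="LittleEndian">\n'
    printf '  <Collection>\n'
    grep -h '<DataSet' "$@"   # DataSet entries from every input, in order
    printf '  </Collection>\n'
    printf '</VTKFile>\n'
  } > "$out"
}

# Usage: list the files in chronological order (inspiral first), e.g.
# merge_pvd Combined.pvd Inspiral.pvd Plunge.pvd Merger.pvd
```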
Otherwise, you must instead use AMRscriptSurf.sh. This script is able to create a paraview file with a directory of vtk files for surface data that is segmented with a changing grid (due to AMR). It uses ConvertSurfaceToVtk on each directory, reorganizes all of the output vtk files into a single structure, and writes a paraview xml file that appropriately points to the vtk files in the new structure. The files that you will need are in SpEC/Support/Visualization/AMRDataCombine. First, you must add AMRscriptSurf.sh to your path. By adding <code> PATH=$PATH:CODE_HOME/Support/Visualization/AMRDataCombine </code> to your .bashrc file, you can call AMRscriptSurf.sh from any directory without copying any of the scripts there. You will just need to execute AMRscriptSurf.sh in a directory that has an input file.
When you execute AMRscriptSurf.sh, pass the input file as the first argument, i.e.:
AMRscriptSurf.sh [Input file]
I recommend you make a new directory to do this in, since it makes temporary links etc. along the way, and cleans up after itself. Here is an example of a working input file compatible with AMRscriptSurf.sh:
#Comments may be left in your input file for your convenience
Basename = (None, Plunge,
Merger); #Lines that do not begin with a specific header must end with a ";", otherwise, optional
#Only the end of lines may contain a ";"
Locations = /InspiralDirectory; #Note that Locations, etc., may be on multiple lines, as long as the same line headers are grouped together
Locations = /PlungeDirectory
Locations = /MergerDirectory; #May contain extra spacing, "()"s, ","s, or "="s
#At least one space is left between different arguments
ConvertSurfaceToVtk AhA_LOut12x_CoordsS2.dump AhA_LOut12y_CoordsS2.dump AhA_LOut12z_CoordsS2.dump dimless_ricciScalar_AhA.dump
Files ApparentHorizons/AhA_LOut12x_CoordsS2.dump ApparentHorizons/AhA_LOut12y_CoordsS2.dump ApparentHorizons/AhA_LOut12z_CoordsS2.dump
MoviesOfApparentHorizons/dimless_ricciScalar_AhA.dump;
Levels 5
#Optional headers below used for spin vector and trajectory processing
CopyFilesNames = COWSpin_AhA.dat, SpinDirection_AhA.dat, Trajectory_AhA.dat; #optional
CopyFiles = /ApparentHorizons/COWSpin_AhA.dat; #optional (corresponding to CopyFilesNames)
/ApparentHorizons/SpinDirection_AhA.dat;
/ApparentHorizons/Trajectory_AhA.dat;
Now, we will break apart this input file example to demonstrate how input files should be written in general.
- This input file is written for a case where you have 3 different locations for files, corresponding to an Inspiral, Plunge, and Merger.
- It assumes that in each of these locations, the files are of the form BasenameLev#_AA, etc.
- The "Basename" line specifies the basenames for each location, in order. In this case, the Inspiral directories did not have a basename, so I used the keyword "None."
- The next line, "Locations," specifies the locations for each of those sets of data (the full paths relative to the directory in which AMRscriptSurf.sh is being run). Note, order does matter for these, so if you want inspiral data to come first, you need to specify it first.
- The next line gives the "ConvertSurfaceToVtk" command, without the -b option. It will not currently expand out the {x,y,z} correctly, so right now you have to specify each of the coordinate files separately.
- The line with "Files" tells the sublocations of each of the "ConvertSurfaceToVtk" files (relative to the "Run" directories under each BasenameLev#_?? directory). In this case, some of them were in ApparentHorizons and some were in MoviesOfApparentHorizons.
- The line with "Levels" specifies for which levels you want to make data. By default, it will automatically run on all levels present, (i.e., 0-9), but this is less thoroughly tested.
- The output is a directory, "Data", that contains all of the vtk files, and Data#.pvd, where # is the level number. So if you were to run on all levels, where 3, 4 and 5 are present, you would get a "Data" directory, with all vtk files in it, and a Data3.pvd, Data4.pvd, and Data5.pvd.
- The line with "CopyFilesNames" specifies the names of spin vector and trajectory .dat files which are to be converted into vtk files. If the new-format Spin_Ah?.dat file is not included, then both COWSpin_Ah?.dat and SpinDirection_Ah?.dat are needed to convert the spin vectors.
- The line with "CopyFiles" tells the sublocations of each of the "CopyFilesNames" files (relative to the "Run" directories under each BasenameLev#_?? directory). In this case, all of them were in ApparentHorizons.
NOTE: The input file is translated by SpEC/Support/Visualization/AMRDataCombine/InputToCfg.sh into a more "formal" config file before AMRscriptSurf.sh executes. Therefore, formatting of input files need not be rigorous. If errors occur, see InputToCfg.example in spec/Support/Visualization/AMRDataCombine or contact Robert McGehee, [email protected], for more details.
For reference, here is one example of a Pvd file for Surface data. Your surface data should look similar:
<?xml version="1.0"?>
<VTKFile type="Collection" version="0.1" byte_order="LittleEndian" compressor="\
vtkZLibDataCompressor">
<Collection>
<DataSet timestep="0" file="InBlackHole/data0.vtu" />
<DataSet timestep="0.5" file="InBlackHole/data1.vtu" />
<DataSet timestep="1" file="InBlackHole/data2.vtu" />
.
.
.
<DataSet timestep="756" file="MergerBlackHole/data912.vtu" />
<DataSet timestep="756.5" file="MergerBlackHole/data913.vtu" />
<DataSet timestep="757" file="MergerBlackHole/data914.vtu" />
</Collection>
</VTKFile>
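Beyond eyeballing the structure, a quick well-formedness check (using only Python's standard library) catches truncated or badly concatenated index files before ParaView chokes on them. Note that the "." lines in the example above are abbreviations; a real .pvd contains only DataSet entries there.

```shell
# Parse the index file as XML; prints OK if it is well-formed,
# otherwise python3 exits nonzero with a parse error.
python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' Data.pvd && echo OK
```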
ApplyObservers has been updated to parallelize across timesteps, which considerably speeds up vtk file generation. To take full advantage of this change, you should grab one or more compute nodes on zwicky (or whatever computer you are using) before calling ApplyObservers. Here is an example call of the new options:
mpirun -npernode $CPUsPerNode ApplyObservers -TimeParallel -TPBasename $DirectoryPrefix -v -t Rho0Phys -r Scalar -domaininput HyDomain.input ApplyObservers.input
The "-t" option specifies which variables you want visualized, and it is important to make sure that their names match up perfectly. The "-r" option specifies what kind of tensor the data is ("Scalar" if it is a scalar, "1" if it is a 1-dimensional tensor, "11" if it is a 2-dimensional symmetric tensor, "12" if it is a 2-dimensional non-symmetric tensor, etc). The "-domaininput HyDomain.input" option is needed if your volume data is dumped on its own domain, as is often the case with hydro data.
ApplyObservers.input is an input file that specifies the other options necessary for converting the data from the grid frame to the inertial frame. Here is one example input file:
# ApplyObservers.input
Observers =
ConvertToVtk (
Basename = VtkDataAlex;
Input = Rho0Phys;
UseTimeAsFileLabel = true;
),
;
If you are interested in a different scalar such as "Temperature," you would need to change the "Input" variable in the "ConvertToVtk" line and the ApplyObservers command to "Temperature." Otherwise, as long as the data is dumped at the same time, the script should stay the same.
With regards to parallelization, "TimeParallel" is the new option for ApplyObservers that triggers its timestep parallelization. It will create multiple directories named by the string after the "TPBasename" option (defaults to "TimeParallelFolder" if not specified). Each of these directories will contain the vtk files. In order to merge these directories into one, run the CombineVtkFolders.sh script in the directory containing all of the newly-created directories. The script needs to be passed two basenames: the ApplyObservers directory basename (default TimeParallelFolder) and the Vtk files basename (the prefix of the .pvd file). In a directory with folders named "TimeParallelFolder0","TimeParallelFolder1", etc that each contain a directory named "AlexVtk" and the files "AlexVtk.pvd" and "AlexVtk.visit," the correct syntax is as follows:
./CombineVtkFolders.sh TimeParallelFolder AlexVtk
Afterwards, you should have one directory named by the ApplyObservers basename that contains all the vtk files you need.
If the data is not all in the same directory, such as Lev#_AA/Run/ApparentHorizons/, Lev#_AB/Run/ApparentHorizons, etc., you have two methods. First, if your data is in only 2 or 3 directories, you can use the single-directory method on each directory and then manually concatenate the .pvd files produced for each one; just make sure that you delete the headers and footers between the concatenated files so that there is only one header and one footer in the final .pvd file.
Otherwise, you must use AMRscriptVol.sh. This script is able to create a paraview file with a directory of vtk files for volume data that is segmented with a changing grid (due to AMR). It uses ApplyObservers on each directory, reorganizes all of the output vtk files into a single structure, and writes a paraview xml file that appropriately points to the vtk files in the new structure. The files that you will need are in SpEC/Support/Visualization/AMRDataCombine. First, you must add AMRscriptVol.sh to your path. By adding <code> PATH=$PATH:CODE_HOME/Support/Visualization/AMRDataCombine </code> to your .bashrc file, you can call AMRscriptVol.sh from any directory without copying any of the scripts there. You will just need to execute AMRscriptVol.sh in a directory that has an input file. When you execute AMRscriptVol.sh, pass the input file as the first argument, i.e.:
AMRscriptVol.sh [Input file]
I recommend you make a new directory to do this in, since it makes temporary links etc. along the way, and cleans up after itself. For an example of a working input file compatible with AMRscriptVol.sh, look under "AMR (AMRscriptSurf.sh)" above. The only noteworthy difference is that the "ConvertSurfaceToVtk" lines in the input file will be "ApplyObservers" lines instead. All of the input formatting remarks still hold.
This script is able to create a paraview file with a directory of vtk files for domain data that is segmented with a changing grid (due to AMR). It uses ApplyObservers on each directory, reorganizes all of the output vtk files into a single structure, and writes a paraview xml file that appropriately points to the vtk files in the new structure. The files that you will need are in SpEC/Support/Visualization/AMRDataCombine. First, you must add AMRscriptDomain.sh to your path. By adding <code> PATH=$PATH:CODE_HOME/Support/Visualization/AMRDataCombine </code> to your .bashrc file, you can call AMRscriptDomain.sh from any directory without copying any of the scripts there. You will just need to execute AMRscriptDomain.sh in a directory that has an input file. When you execute AMRscriptDomain.sh, pass the input file as the first argument, i.e.:
AMRscriptDomain.sh [Input file]
I recommend you make a new directory to do this in, since it makes temporary links etc. along the way, and cleans up after itself. For an example of a working input file compatible with AMRscriptDomain.sh, look under "AMR (AMRscriptSurf.sh)" above. The only noteworthy differences are that the "ConvertSurfaceToVtk" and "Files" lines are not in the input file and the "Levels" line is followed by a "Times" line. All of the input formatting remarks still hold.
For reference, here is one example of a Pvd file for volume data. Please note that unlike surface data, datasets have a "part" component in addition to the normal timestep component. Your volume data should look similar.
<?xml version="1.0"?>
<VTKFile type="Collection" version="0.1" byte_order="LittleEndian" compressor="\
vtkZLibDataCompressor">
<Collection>
<DataSet part="0" timestep="0" file="InSpiral/Interval0.0.0_0.vts" />
<DataSet part="1" timestep="0" file="InSpiral/Interval0.0.1_0.vts" />
<DataSet part="2" timestep="0" file="InSpiral/Interval0.1.0_0.vts" />
<DataSet part="3" timestep="0" file="InSpiral/Interval0.1.1_0.vts" />
.
.
.
<DataSet part="45" timestep="756" file="MergerSpiral/Interval5.2.1_228.vts" />
<DataSet part="46" timestep="756" file="MergerSpiral/Interval5.3.0_228.vts" />
<DataSet part="47" timestep="756" file="MergerSpiral/Interval5.3.1_228.vts" />
</Collection>
</VTKFile>
If you wish to visualize alternative types of data such as gravitational waves or black hole spin vectors, you will need to use the script "ConvertTrajectoryToVtk" located in spec/Support/Visualization. It takes ASCII-format input data, parametrizes it with respect to time (the number of time steps is determined by the input data), and writes it to an unstructured mesh. If there are duplicate times, only the second instance of each is included, as those second instances are the "corrected" points in the data. The script outputs as many .vtu files as there are timesteps into directory BaseName and then creates a BaseName.visit and BaseName.pvd file. Additional files containing vector data can be included. A set of .vtu files in directory BaseName_<corresponding entry of -n> is created for each additional vector data file given. Each .vtu file in one of these directories includes a vector that can be displayed in both VisIt and ParaView. It is run as follows:
ConvertTrajectoryToVtk -n Basename,[vector Basenames...] trajectorydatafile [vectordatafiles...]
OPTIONS:
-v -- Verbose
-n string,string...-- Names corresponding to the data files from
which data is being extracted. First entry
should be BaseName, the directory name for the
trajectory data VTK files. If vector data files
are given, there should be one name for each
additional data file. BaseName_<Name> will be
the directory name for the corresponding VTK
files.
trajectorydatafile -- contains information about the trajectory of a
surface. May have comment fields, but the file
should be formatted so that each line has four
columns in the following format: time x y z,
the last three being the three components of
the spatial location of the trajectory.
vectordatafile1... -- contains information about the components of a
vector at each of the same times as in the
trajectorydatafile. Each file should be format-
ted the same way as above: time x y z. The
relevant columns can be extracted from files
containing unneeded data by using the
Python-script executable ApplyFormulaDat found
in /Support/DatDataManip. The executable has
help-text, which you can read in your favorite
text editor.
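If ApplyFormulaDat is not handy, a plain awk one-liner can pull the four needed columns out of a whitespace-separated .dat file (the file name and column numbers here are illustrative; check your file's comment header for the actual layout):

```shell
# Keep time (assumed column 1) and three components (assumed columns
# 2-4), skipping '#' comment lines; writes a clean "time x y z" file.
awk '!/^#/ { print $1, $2, $3, $4 }' Trajectory_AhA.dat > AhA_txyz.dat
```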
As explained above, both ParaView and VisIt read the VTK data-format: one file per timestep per subdomain, collected in a Basename directory and indexed by Basename.visit and Basename.pvd files, which must not already exist when the conversion routines start. H5 files offer an advantage over the former .dump format, as all of the data necessary for the .vtk files is stored within one or two H5 files, in a single directory.
Currently, we are equipped to process surfaces, spins, and trajectories. This task is accomplished by ConvertH5SurfaceToVtk, which is located in /SpEC/Support/H5Manip/. Since the H5 files of interest should always be in a single directory, it is unnecessary to make a distinction between single- and multi-directory data.
If you only wish to acquire surface data from an H5 file, the process is quite easy. Once you are in the directory of your H5 files, run the following command to determine the location of the black hole surface data:
/<SpEC Home>/Support/H5Manip/H5list <H5 file name>.h5
Typically, the H5 file of interest will have several .tdm files within it, named according to the scalar surface data that they represent. For example, a surface H5 file may contain files called "RicciScalar.tdm", "WeylB_NN.tdm", or "SpinFunction.tdm", among others. Once you are certain of the location of the surface data (since H5 files typically come in pairs), run the following command in the directory to convert the raw data into .vtk files:
/<SpEC Home>/Support/H5Manip/ConvertH5SurfaceToVtk <H5 surface file name>.h5
Please note that several additions must be made to your PATH in ~/.bashrc for this to run properly. Please see the code's documentation for further details about paths, options, and arguments.
Running the above command should produce AhX.pvd files in your current directory, which can be converted to images with ParaviewPythonScript.py. You can check that your .pvd files are in the following format once ConvertH5SurfaceToVtk has finished running:
<?xml version="1.0"?>
<VTKFile type="Collection" version="0.1" byte_order="LittleEndian" compressor="\
vtkZLibDataCompressor">
<Collection>
<DataSet timestep="0" file="InBlackHole/data0.vtu" />
<DataSet timestep="0.5" file="InBlackHole/data1.vtu" />
<DataSet timestep="1" file="InBlackHole/data2.vtu" />
.
.
.
<DataSet timestep="756" file="MergerBlackHole/data912.vtu" />
<DataSet timestep="756.5" file="MergerBlackHole/data913.vtu" />
<DataSet timestep="757" file="MergerBlackHole/data914.vtu" />
</Collection>
</VTKFile>
NOTE: If you view these objects in the ParaView GUI and you have an AhC, it will be displayed for the entire inspiral. To fix this, you will need to open the Animation View, then change the AhC.pvd visibility to a Step from 0->1 at the common horizon time.
NOTE: SpEC outputs compressed h5 files, which must be decompressed as they are read. For a recent ConvertH5SurfaceToVtk call operating on HorizonsDump.h5, the speed difference was a factor of ~1000: days vs. minutes. The speed-up can be realized by uncompressing the h5 file first using:
h5repack -f NONE HorizonsDump.h5 HorizonsDump_uncompressed.h5
Trajectory data can be converted to VTK format using commands similar to
ConvertH5TrajectoryToVtk Horizons.h5 /AhA.dir/CoordCenterInertial.dat AhA_Traj
Spin data along the trajectory can be converted to VTK using
ConvertH5VectorToVtk -trajectory /AhA.dir/CoordCenterInertial.dat Horizons.h5 /AhA.dir/chiInertial.dat AhA_Spin
The spin vectors can be drawn using the "Glyph" filter in Paraview.
H5 files containing volume data can be converted to vtk files representing a slice through the data. This is done through the Interpolate2DSliceToVtk observer and the ApplyObservers script.
To set this up, ensure that all the H5 files are in the same directory along with any necessary input files (particularly Domain.input) and history text files. These will generally be output by your simulation as needed. Next, the ApplyObservers script will require an input file itself, which can specify domain and data information among other things. What is necessary for the input file, however, is a list of observers you wish to use. For this task the aforementioned Interpolate2DSliceToVtk is required. Thus your input file must contain the following:
Observers =
Interpolate2DSliceToVtk(
Input = [name of input variable];
Coords = [coordinates to use];
Basename = [directory to save output in];
Origin = [origin representing centre of slice];
NormalVec = [normal vector of plane to slice];
NumOfPointsPerM = [density of points];
Size = [size of plane];
TopologicalInterpolator = [interpolator to use];
);
There are two optional parameters which will change the behaviour of the observer. By default, NumOfPointsPerM is the number of points per unit distance and Size is the number of points in each direction. However, providing an optional SizeInCoords = true will tell the observer to interpret Size as the number of coordinates in each direction. Additionally, a spatial coordinate map can optionally be provided with the SpatialCoordMap parameter. This is useful if, for example, you wish to interpolate Inspiral BHNS data to an inertial frame as opposed to the often default co-moving frame.
To use the observer, navigate to the folder containing the data and input file and run the following command (assuming the input file you've created is named Vis.input):
ApplyObservers -t [input variable, e.g. Rho0Phys] -r [type of variable, e.g. Scalar] -d [dimensions, presumably 3] -domaininput Domain.input Vis.input
This should produce vtk files and a corresponding pvd file within a directory named after the Basename you provided. These can then be visualized in Paraview to see the slice through all of the data's timesteps. To limit the output, you can use the -Steps option to specify how many timesteps you want ApplyObservers to process. Alternatively, -f and -l specify the first and last timestep, respectively, and "-s n" will tell ApplyObservers to output every nth timestep.