From eff2cdc8601c804bbae6bcdd2767fe2dfbee66e5 Mon Sep 17 00:00:00 2001
From: Carlos Brandt
Date: Fri, 13 Apr 2018 20:26:33 +0200
Subject: [PATCH] Update README.md

---
 README.md | 77 ++++++++++++++++++++++++-------------------------
 1 file changed, 33 insertions(+), 44 deletions(-)

diff --git a/README.md b/README.md
index ae52a53..8fc5bdc 100755
--- a/README.md
+++ b/README.md
@@ -4,30 +4,24 @@
 # Swift DeepSky
 
-The DeepSky pipeline provides a *deep* view of the Sky in *x-ray* as seen by [Neil Gehrels Swift Observatory][Swift].
-The pipeline starts with a position of the Sky, given by the user -- Right Ascension, Declination -- and from there
-automatically combines *all* observations made by [Swift/XRT][XRT] up to date, automatically identifies
-the objects in the field and measures their fluxes, countrates, spectral energy slope, hydrogen column density and other
-parameters involved in the process, like the *effective* exposure time (*per object*).
+The DeepSky pipeline provides *deep* observations of the *X-ray* sky as seen by the Swift satellite -- currently named the [Neil Gehrels Swift Observatory][Swift] in memory of Neil Gehrels, the former head of the mission.
 
-Data for the processing is downloaded on the fly, not being necessary for the user to have them before hand --
-by all means, if the user has already the necessary data in his/her local storage the pipeline may use it.
+The pipeline starts with a position on the Sky, given by the user -- Right Ascension, Declination -- and from there automatically combines *all* observations made by [Swift/XRT][XRT] to date, identifies the objects in the field and measures their fluxes, count rates, spectral energy slope, hydrogen column density and other parameters involved in the process, like the *effective* exposure time (*per object*).
 
-To ease the use and portability of this package, a [Docker container is also available][dockerhub].
-The use of containers allows us to bypass the setup process and go straight to the data analysis.
+Data for the processing are downloaded on the fly, so the user does not need to have them beforehand -- that said, if the user already has the necessary data in local storage, the pipeline can use it.
 
-See section [Docker](#Docker) for instructions on using the *ready-to-use* container version;
-look for the section [Install](#Install) if you want to install the source code.
+To ease the use and portability of this package, a [Docker container is also available][dockerhub]. The use of containers allows us to bypass the setup process and go straight to the data analysis.
 
-## Running it
+See section [Docker](#Docker) for instructions on using the *ready-to-use* container version; look for the [Install](#Install) section if you want to install from source.
 
-The pipeline, when ran without arguments, will output a `help` message
-like the one below:
+## Running it
 
+The pipeline, when run without arguments, will output a `help` message like the one below:
 ```
+
 $ swift_deepsky
 
- Usage: pipeline.sh -d { --ra --dec | --object }
+ Usage: pipeline.sh { --ra --dec | --object }
 
  Arguments:
   --ra    VALUE : Right Ascension (in DEGREES)
@@ -39,64 +33,62 @@ $ swift_deepsky
   -d|--data_archive : data archive directory; Where Swift directories-tree is.
                       This directory is supposed to contain the last 2 levels
                       os Swift archive usual structure: 'data_archive'/START_TIME/OBSID
+  -l|--label LABEL  : Label for output files. Otherwise the object NAME or ra,dec VALUEs will be used.
 
  Options:
   -f|--master_table : Swift master-table. This table relates RA,DEC,START_TIME,OBSID.
                       The 'master_table' should be a CSV file with these columns
   -o|--outdir       : output directory; default is the current one.
                       In 'outdir', a directory for every file from this run is created.
+  -u|--upload       : upload final results to the central archive (no personal data is taken). Default.
+  --noupload        : do not upload final results to the central archive (no personal data is taken)
+  --start           : initial date to consider for observation selection. Format is 'dd/mm/yyyy'
+  --end             : final date to consider for observation selection. Format is 'dd/mm/yyyy'
   -h|--help         : this help message
   -q|--quiet        : verbose
+
 ```
 
-Apart from the coordinate/object to use as the pointing centroid, and
-optionally the size of the circle to search for observation around,
-the path to an existent swift archive may be given to avoid downloading
-new data (notice that only a small amount, the necessary data only,
-is downloaded anyway).
+Apart from the coordinate/object to use as the pointing centroid, and optionally the size of the circle to search for observations around, the path to an existing Swift archive may be given to avoid downloading new data (notice that only a small amount -- the necessary data only -- is downloaded anyway).
 
 For the records, Swift data is downloaded from the Italian Space Agency (swift.asdc.asi.it) archive.
 
-The default Swift master table --relating (RA,DEC) coordinates to epoch of
-observation (START_TIME) to observation-id (OBSID)-- is shipped together
-and it contains all Swift observations as of September/2017.
+The default Swift master table --relating (RA,DEC) coordinates, epoch of observation (START_TIME) and observation-id (OBSID)-- is shipped with the package and contains all Swift observations as of February 28, 2018.
 
 If that is running fine, we may make a test:
 ```bash
 $ swift_deepsky --ra 34.2608 --dec 1.2455
 ```
 
-And that process the 12 arcmin (default radius) field around RA=34.2608 and Dec=1.2455.
+This will process every observation it finds in the archive within the 12 arcmin (default radius) field around RA=34.2608 and Dec=1.2455.
+
+Or you can ask for a specific object, for example, the classic `3C279`. You can also ask for a specific time period, which we will do now by selecting only the observations from the first two months of 2018:
+```bash
+$ swift_deepsky --object 3c279 --start 1/1/2018 --end 28/2/2018
+```
 
 ## Docker
 
-To use this package we just need Docker installed.
-Look [#Install-Docker] for instructions about your platform.
+To use this package from a container, Docker must be installed; see [#Install-Docker] for instructions for your platform.
 
 **Note**
-> The syntax on calling containers may be a bit ugly, don't worry;
-> we will hide the ugliness under an alias.
-> But I would like to explain the container' parameters so that we
-> understand what is going on.
+> The syntax for calling containers may be a bit ugly; don't worry, we will hide the ugliness under an alias.
+> But I would like to explain the container's parameters so that we understand what is going on.
 
-The name of the Swift-DeepSky container is `chbrandt/swift_deepsky`,
-it is publicly available through the [Docker-Hub][dockerhub]
+The name of the Swift-DeepSky container is `chbrandt/swift_deepsky`; it is publicly available through [Docker-Hub][dockerhub].
 
 The `latest` version of the pipeline can be downloaded by typing
 ```
 # docker pull chbrandt/swift_deepsky
 ```
 
-Considering we want to run the pipeline and have our results all
-organized under a directory called `work` we'd use the following call:
+Considering we want to run the pipeline and have our results organized under a directory called `work`, we'd use the following call:
 ```
 # docker run -v $PWD/work:/work chbrandt/swift_deepsky
 ```
 
-`$PWD/work` means we are asking the outputs to be written to directory
-`work` inside current working directory (`$PWD`).
-You may use any directory you want here; if such directory does not
-exist it will be created for you.
+`$PWD/work` means we are asking the outputs to be written to the directory `work` inside the current working directory (`$PWD`).
+You may use any directory you want here; if such a directory does not exist, it will be created for you.
 
 We can generalize the work directory and subsequent call to:
 ```
@@ -107,19 +99,16 @@ We can generalize the work directory and subsequent call to:
 
 ### Make it beautiful again
 
 We can `alias` such command-line to a simple, clean call.
-Let's say we decide to put our results in a directory called
-`sds_results` under our `Home` directory.
+Let's say we decide to put our results in a directory called `sds_results` under our `Home` directory.
 We can then define the alias as:
 ```
 # alias swift_deepsky="docker run --rm -v \$HOME/sds_results/work:/work chbrandt/swift_deepsky"
 ```
 
-*Notice we are defining the alias as `swift_deepsky`, but that is not
-mandatory; the alias can be called whatever you like better.*
+*Notice we are defining the alias as `swift_deepsky`, but that is not mandatory; you may name the alias whatever you prefer.*
 
-We may now call the pipeline as presented in [#Running-it], as if we
-were running it from the source code binary:
+We may now call the pipeline as presented in [#Running-it], as if we were running it from a local installation of the source code:
 ```
 # swift_deepsky
 ```
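+
+For instance, combining the alias above with the coordinates used in [#Running-it] -- a minimal usage sketch; the outputs will end up under the mounted directory, here `$HOME/sds_results/work`:
+```
+# swift_deepsky --ra 34.2608 --dec 1.2455
+```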