backend=local
# Hashing strategy for call-caching (3 choices).
# This parameter is for local (local/slurm/sge/pbs/lsf) backends only.
# It is important for call-caching, i.e. re-using outputs from
# previous (possibly failed) workflows.
# The cache will miss if a different strategy is used.
# "file" was the default for all versions of Caper < 1.0;
# "path+modtime" is the new default for Caper >= 1.0.
# file: md5sum hash of file contents (slow).
# path: file path only.
# path+modtime: file path and modification time.
local-hash-strat=path+modtime
# Metadata DB for call-caching (reusing previous outputs):
# Cromwell supports restarting workflows based on a metadata DB.
# The DB is in-memory by default.
#db=in-memory
# If you use 'caper server', you can use one unified '--file-db'
# for all submitted workflows. In that case, uncomment the following two
# lines and define file-db as an absolute path where metadata for all
# workflows will be stored.
#db=file
#file-db=
# If you use 'caper run' and want call-caching:
# make sure to define a different 'caper run ... --db file --file-db DB_PATH'
# for each pipeline run.
# To restart a run, define the same '--db file --file-db DB_PATH';
# Caper will then collect/re-use previous outputs without running the
# same tasks again. Previous outputs are simply hard/soft-linked.
# Local directory for localized files and Cromwell's intermediate files.
# If not defined, Caper will make .caper_tmp/ under local-out-dir or the CWD.
# /tmp is not recommended here since Caper stores all localized data files
# in this directory (e.g. input FASTQs defined as URLs in the input JSON).
local-loc-dir=/mnt/storage/hong/caper
cromwell=/home/hong/.caper/cromwell_jar/cromwell-65.jar
womtool=/home/hong/.caper/womtool_jar/womtool-65.jar
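The three local-hash-strat choices in the config above differ only in what they fingerprint to decide whether a cached output can be reused. A rough Python illustration of the idea (this is an explanatory sketch, not Caper's actual implementation; the function name cache_key is made up):

```python
import hashlib
import os

def cache_key(path: str, strategy: str = "path+modtime") -> str:
    """Illustrative cache key for one input file under each strategy."""
    if strategy == "file":
        # Hash the full file contents: robust but slow for large files.
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()
    if strategy == "path":
        # Hash the path string only: fast, but misses content changes.
        return hashlib.md5(path.encode()).hexdigest()
    if strategy == "path+modtime":
        # Hash path plus modification time: fast, and catches most edits.
        key = f"{path}:{os.path.getmtime(path)}"
        return hashlib.md5(key.encode()).hexdigest()
    raise ValueError(f"unknown strategy: {strategy}")
```

This is why the comment warns that the cache will miss if the strategy changes between runs: the same file produces a different key under each strategy, so previously recorded outputs no longer match.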
Error log
Caper automatically runs a troubleshooter for failed workflows. If it doesn't, get the WORKFLOW_ID of your failed workflow with caper list, or directly use a metadata.json file in Caper's output directory.
* Found failures JSON object.
[
{
"message": "Workflow failed",
"causedBy": [
{
"message": "Job wgbs.make_conf:NA:2 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.",
"causedBy": []
},
{
"message": "Job wgbs.make_metadata_csv:NA:2 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.",
"causedBy": []
}
]
}
]
* Recursively finding failures in calls (tasks)...
==== NAME=wgbs.make_conf, STATUS=RetryableFailure, PARENT=
SHARD_IDX=-1, RC=1, JOB_ID=588529
START=2022-02-24T14:20:39.912Z, END=2022-02-24T14:20:52.057Z
STDOUT=/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_conf/execution/stdout
STDERR=/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_conf/execution/stderr
STDERR_CONTENTS=
/home/hong/anaconda3/envs/wgbs/bin/python3: can't find '__main__' module in '/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_conf/execution/'
STDERR_BACKGROUND_CONTENTS=
/home/hong/anaconda3/envs/wgbs/bin/python3: can't find '__main__' module in '/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_conf/execution/'
==== NAME=wgbs.make_conf, STATUS=Failed, PARENT=
SHARD_IDX=-1, RC=1, JOB_ID=592187
START=2022-02-24T14:20:54.103Z, END=2022-02-24T14:21:01.920Z
STDOUT=/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_conf/attempt-2/execution/stdout
STDERR=/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_conf/attempt-2/execution/stderr
STDERR_CONTENTS=
/home/hong/anaconda3/envs/wgbs/bin/python3: can't find '__main__' module in '/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_conf/attempt-2/execution/'
STDERR_BACKGROUND_CONTENTS=
/home/hong/anaconda3/envs/wgbs/bin/python3: can't find '__main__' module in '/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_conf/attempt-2/execution/'
==== NAME=wgbs.make_metadata_csv, STATUS=RetryableFailure, PARENT=
SHARD_IDX=-1, RC=1, JOB_ID=588545
START=2022-02-24T14:20:40.106Z, END=2022-02-24T14:20:52.057Z
STDOUT=/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_metadata_csv/execution/stdout
STDERR=/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_metadata_csv/execution/stderr
STDERR_CONTENTS=
/home/hong/anaconda3/envs/wgbs/bin/python3: can't find '__main__' module in '/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_metadata_csv/execution/'
STDERR_BACKGROUND_CONTENTS=
/home/hong/anaconda3/envs/wgbs/bin/python3: can't find '__main__' module in '/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_metadata_csv/execution/'
==== NAME=wgbs.make_metadata_csv, STATUS=Failed, PARENT=
SHARD_IDX=-1, RC=1, JOB_ID=593365
START=2022-02-24T14:20:56.102Z, END=2022-02-24T14:21:03.969Z
STDOUT=/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_metadata_csv/attempt-2/execution/stdout
STDERR=/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_metadata_csv/attempt-2/execution/stderr
STDERR_CONTENTS=
/home/hong/anaconda3/envs/wgbs/bin/python3: can't find '__main__' module in '/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_metadata_csv/attempt-2/execution/'
STDERR_BACKGROUND_CONTENTS=
/home/hong/anaconda3/envs/wgbs/bin/python3: can't find '__main__' module in '/home/hong/wgbs-pipeline/wgbs/9cd9270a-b1e7-45cb-a706-260cd3685f1c/call-make_metadata_csv/attempt-2/execution/'
2022-02-24 15:21:17,695|caper.nb_subproc_thread|ERROR| Cromwell failed. returncode=1
2022-02-24 15:21:17,695|caper.cli|ERROR| Check stdout in /home/hong/wgbs-pipeline/cromwell.out.3
Conda isn't supported by this pipeline. We recommend using Docker, or a cloud backend (GCP, AWS) if possible.
If you can only use Conda, you will need to install all the pipeline dependencies manually. In that case, it doesn't look like the pipeline scripts (in the wgbs_pipeline folder of this repo) are on your PATH.
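The PATH fix suggested above can be sketched as follows. This assumes the repo was cloned to ~/wgbs-pipeline (the clone location and activation order are assumptions; adjust to your setup):

```shell
# Activate the conda env the tasks use (env name taken from the stderr paths).
# conda activate wgbs

# Put the pipeline's helper scripts on PATH so Cromwell tasks can find them.
# ~/wgbs-pipeline is an assumed clone location.
export PATH="$HOME/wgbs-pipeline/wgbs_pipeline:$PATH"

# Confirm the directory is now first on PATH.
echo "$PATH" | tr ':' '\n' | head -n 1
```

Adding this export to the shell profile (or to the environment Caper runs under) would make it persist across runs; the "can't find '__main__' module" errors in the log are consistent with python3 being handed a directory instead of a resolvable script.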