I've just been reviewing recent changes in ncov-ingest while investigating #471.
I noticed that every rule has now gained two new lines, `benchmark: "benchmarks/rulename.txt"`. I'm all for benchmarking, i.e. collecting runtime/memory/CPU usage information, but this way of doing it seems suboptimal: the added lines carry almost no intrinsic information, and one could easily write a few-line macro that adds them to every rule based on the rule's own name.
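For context, the repeated pattern looks roughly like this; the rule name, inputs, and outputs below are placeholders, not actual ncov-ingest rules:

```
rule rulename:
    input:
        "in.txt"
    output:
        "out.txt"
    # the two lines now duplicated across every rule:
    benchmark:
        "benchmarks/rulename.txt"
    shell:
        "do-something {input} > {output}"
```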
I'm not saying we should do this in ncov-ingest; really, Snakemake should not require these verbose directives, but should instead offer a single CLI option that auto-generates the benchmark files.
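To make the "few-line macro" idea above concrete, here is a rough sketch of a preprocessing script that injects the directive after every rule header. It operates on the Snakefile text, since Snakemake doesn't (to my knowledge) expose a public hook for attaching benchmarks to all rules; the file names and regex here are purely illustrative, not a proposal for ncov-ingest:

```python
import re
import sys

# Match lines like "rule my_rule:" and capture the rule name.
RULE_HEADER = re.compile(r"^rule\s+(\w+)\s*:\s*$")

def add_benchmarks(snakefile_text: str) -> str:
    """Insert a benchmark directive right after every rule header."""
    out_lines = []
    for line in snakefile_text.splitlines():
        out_lines.append(line)
        match = RULE_HEADER.match(line)
        if match:
            name = match.group(1)
            out_lines.append("    benchmark:")
            out_lines.append(f'        "benchmarks/{name}.txt"')
    return "\n".join(out_lines) + "\n"

if __name__ == "__main__":
    # Usage: python add_benchmarks.py < Snakefile > Snakefile.benchmarked
    sys.stdout.write(add_benchmarks(sys.stdin.read()))
```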
But I noticed it here, so I thought I'd raise it.
The immediate harm of the two lines is that the diff I'm reviewing contains a lot of noise (much like a reformatting commit). For now I'd be in favour of not adding benchmarks to other workflows until there's a less intrusive/verbose way of doing so.