# spring cleaning: removing bugs and code smells #261

**1b15** commented on Mar 5, 2024:
- addresses "Parameters provided in examples (and default parameters) does not perform BOLD simulation correctly" (#250) and "Example files appear to be broken" (#254)
- fixes inconsistencies and unexpected behaviour in the implementation and interaction of chunking, continued runs, BOLD, and appending outputs
- improves maintainability by removing various code smells (e.g., low cohesion caused by a global BOLD parameter, repeated implementations of the same output-saving code)
**Codecov Report.** Attention: Patch coverage is

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master     #261      +/-   ##
==========================================
+ Coverage   90.93%   91.07%   +0.13%
==========================================
  Files          65       65
  Lines        5206     5172      -34
==========================================
- Hits         4734     4710      -24
+ Misses        472      462      -10
==========================================
```
Thank you very much for this awesome PR. Please have a look at the comments I left. The only thing I can see is that `continue_run` might not behave as expected, and that there is a change of the time axis in `storeOutputsAndStates`.
```python
if self.BOLD.shape[1] == 0:
    # add new data
    self.t_BOLD = t_BOLD_resampled
    self.BOLD = BOLD_resampled
elif append is True:
    # append new data to old data
    self.t_BOLD = np.hstack((self.t_BOLD, t_BOLD_resampled))
    self.BOLD = np.hstack((self.BOLD, BOLD_resampled))
else:
    # overwrite old data
    self.t_BOLD = t_BOLD_resampled
    self.BOLD = BOLD_resampled
```
You seem to have the `append=True` case, why?
The functionality for saving outputs is already in `models/model.py`.
```diff
@@ -311,7 +299,7 @@ def clearModelState(self):
         self.state = dotdict({})
         self.outputs = dotdict({})
         # reinitialize bold model
-        if self.params.get("bold"):
+        if self.boldInitialized:
```
Does this do the same thing as before? Reads like: if it is initialized, initialize it.
The difference is that BOLD is now only initialized at the beginning of the first run. The BOLD state only needs to be cleared (with re-initialization) if it has been initialized before.
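The guard pattern described here can be sketched as follows; apart from the `boldInitialized` flag, the class and method bodies are illustrative stand-ins, not neurolib's actual implementation:

```python
# Minimal sketch of the lazy-initialization guard (illustrative, not
# neurolib's actual code): BOLD setup runs once before the first run,
# and clearing the state only re-initializes if setup already happened.
class Model:
    def __init__(self):
        self.boldInitialized = False

    def initializeBold(self):
        # stands in for the expensive BOLD model setup
        self.boldInitialized = True

    def clearModelState(self):
        # re-initialize the BOLD model only if it was set up before
        if self.boldInitialized:
            self.initializeBold()

m = Model()
m.clearModelState()   # no-op: BOLD was never initialized
m.initializeBold()
m.clearModelState()   # re-initializes, since the flag is set
```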
`neurolib/models/model.py` (Outdated)
```python
if not hasattr(self, "t"):
    raise ValueError("You tried using continue_run=True on the first run.")
```
Why do we have to error here? The user could use `continue_run=True` in the first call as well.
addressed this in the latest commit
```python
    data = data[:, :: self.sample_every]
else:
    raise ValueError(f"Don't know how to subsample data of shape {data.shape}.")
if data.shape[-1] >= self.params["duration"] - self.startindt:
```
Does this `if` clause prevent any previous case from running?
This catches unintended subsampling of BOLD.
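For context, strided slicing like `data[:, ::self.sample_every]` keeps every `sample_every`-th time point. A small self-contained illustration (the array shape and values are made up for the example):

```python
import numpy as np

# Illustration of strided subsampling along the time axis, as in
# `data[:, ::self.sample_every]`; shape and values are invented here.
sample_every = 4
data = np.arange(20, dtype=float).reshape(1, 20)  # (nodes, timepoints)
sub = data[:, ::sample_every]                     # keep every 4th sample
```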
```python
if continue_run:
    self.setInitialValuesToLastState()
else:
```
We would like to allow `continue_run` even when there hasn't been any previous run.
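One way to support this, sketched with hypothetical names rather than neurolib's actual implementation, is to fall back to a fresh start when no previous state exists instead of raising an error:

```python
import numpy as np

# Hypothetical sketch: treat continue_run=True like a fresh run when
# there is no previous state, instead of raising an error.
class Model:
    def run(self, continue_run=False):
        if continue_run and hasattr(self, "t"):
            start = self.t[-1]   # continue from the last time point
        else:
            start = 0.0          # first run: start from zero
        self.t = start + np.arange(1, 4, dtype=float)
        return self.t

m = Model()
first = m.run(continue_run=True)   # allowed even on the first run
second = m.run(continue_run=True)  # continues where the first left off
```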
addressed this in the latest commit
```python
self.setOutput("t", results.time.values, append=append, removeICs=False)
self.setStateVariables("t", results.time.values)
```
This changes the time axis.
We previously had different time-axis behaviours for model and multimodel:
- for regular models, the time axis always started at 0 because it was reset for every chunk unless it was appended
- for multimodels, the time axis started at `self.start_t` for continued runs

We adopted the multimodel behaviour with a more elegant implementation that does not need `self.start_t`.
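The adopted behaviour can be illustrated with a standalone sketch (not the actual `storeOutputsAndStates` code; `dt` and chunk lengths are made up): when appending a chunk, its local time axis is offset by the last stored time point, so no separate `self.start_t` bookkeeping is needed.

```python
import numpy as np

# Sketch of the continued time axis: offset each appended chunk by the
# last stored time value so the combined axis keeps increasing.
dt = 0.1
t_old = np.arange(0, 5) * dt     # first chunk: 0.0 .. 0.4
t_chunk = np.arange(1, 6) * dt   # new chunk, local times 0.1 .. 0.5
t_new = np.hstack((t_old, t_old[-1] + t_chunk))  # 0.0 .. 0.9, monotone
```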
`neurolib/models/multimodel/model.py` (Outdated)
```python
# set start t for next run for the last value now
self.start_t = self.t[-1]
if not hasattr(self, "t"):
    raise ValueError("You tried using continue_run=True on the first run.")
```
```diff
- raise ValueError("You tried using continue_run=True on the first run.")
+ raise ValueError("You tried using `continue_run=True` on the first run.")
```
thank you Vasco, I forgot to make the suggested change in the multimodel
LGTM, amazing work, thank you