For some reason I do not understand, the append_outputs parameter of model.run() defaults to None and thus to False in model.integrateChunkwise(). Meanwhile, the BOLD simulation in model.simulateBold() pulls its input from self.default_output.
As a result, the BOLD simulation, when run with default parameters, will ONLY use the simulated activity of the last chunk to generate the BOLD signal for the entire duration. This is especially noticeable when the model is run with time-dependent external stimuli.
The examples that include BOLD, such as example 0 (ALN model), suffer from this issue, and the effect is quite misleading.
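For context, here is a minimal sketch of the kind of setup where I see this. The stimulus construction and the ext_exc_current parameter name are assumptions from my own setup, not taken from the examples, and may need adapting to your neurolib version:

```python
# Hypothetical reproduction sketch for a single-node ALN model.
# Parameter names (e.g. ext_exc_current) are assumptions and may differ.
import numpy as np
from neurolib.models.aln import ALNModel

model = ALNModel()
model.params["duration"] = 60 * 1000  # ms

# Time-dependent external input: on for the first half, off for the second.
n_steps = int(model.params["duration"] / model.params["dt"])
stimulus = np.zeros((1, n_steps))
stimulus[:, : n_steps // 2] = 0.3
model.params["ext_exc_current"] = stimulus

# Default-style run: chunkwise integration without appending outputs, so (per
# the description above) simulateBold() only sees the last chunk of activity.
model.run(chunkwise=True, bold=True)

# The resulting BOLD signal then looks as if the stimulus were constant over
# the whole duration: the on/off transition is not visible in it.
```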
Edit: setting append_outputs=True does not fix the issue. The BOLD activity still behaves as if the stimulus were not time-dependent at all, so my earlier guess about the cause might be wrong. For my experiments, setting chunkwise=False has an acceptable memory cost and fixes the problem. However, this needs further inspection from the team.
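For completeness, the workaround I am using, under the same hypothetical setup as in the sketch above:

```python
# Workaround sketch: skip chunkwise integration so the full activity time
# series is in memory when the BOLD signal is computed. Memory usage grows
# with duration, which was acceptable in my case.
import numpy as np
from neurolib.models.aln import ALNModel

model = ALNModel()
model.params["duration"] = 60 * 1000  # ms

n_steps = int(model.params["duration"] / model.params["dt"])
stimulus = np.zeros((1, n_steps))
stimulus[:, : n_steps // 2] = 0.3
model.params["ext_exc_current"] = stimulus  # assumed parameter name, see above

model.run(chunkwise=False, bold=True)
# With chunkwise=False the BOLD signal reflects the whole simulated activity,
# including the stimulus on/off transition.
```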