[Bug]: Non-serializable PyBOP models and datasets cannot be copied or parallelized #642
Comments
Hi Yannick, this is somewhat known and does need to be fixed. At the moment, we are able to circumvent the problem for the Pints-based optimisers via their ParallelEvaluation class, but I believe this issue is showing itself in the SciPy optimisers (see: #590). For the moment, you can try using the …
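[Editor's note: for reference, a minimal sketch of the Pints mechanism mentioned above; in current PINTS releases the class is named pints.ParallelEvaluator. On platforms where multiprocessing forks, the wrapped function is inherited by the workers rather than pickled, which is presumably why this path sidesteps the serialization problem. The score function here is a toy stand-in.]

```python
import pints

def score(x):
    # Toy objective standing in for a PyBOP cost function.
    return (x[0] - 3.0) ** 2

# Spawns worker processes that call score() on each position.
evaluator = pints.ParallelEvaluator(score, n_workers=2)
print(evaluator.evaluate([[0.0], [2.0], [4.0]]))
```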
Hi Brady, that would not solve the issue at hand, since I want to deepcopy an optimiser instance. Working around that may just be as much work as fixing the issue, so I'll have a cursory glance at it. Maybe a …
A deep-dive into Python class handling later, it is done: #645
There is one issue remaining: whenever a … It's the following attributes in problem.model._built_model that are not pickleable: … I didn't catch it initially, since it does not occur when just running a PyBaMM model directly. Temporarily setting these attributes to …
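[Editor's note: a rough illustration of the temporary-nulling workaround this comment appears to describe. The attribute names are hypothetical stand-ins, since the actual list is cut off above; only problem.model._built_model comes from the thread.]

```python
import pickle

# Hypothetical placeholders for the non-picklable attributes on
# problem.model._built_model; the real names are elided in the thread.
UNPICKLABLE_ATTRS = ("some_solver_handle", "some_casadi_function")

def pickle_problem(problem):
    """Pickle a problem after temporarily nulling the offending attributes."""
    built = problem.model._built_model
    saved = {name: getattr(built, name) for name in UNPICKLABLE_ATTRS}
    try:
        for name in UNPICKLABLE_ATTRS:
            setattr(built, name, None)  # drop the non-picklable member
        return pickle.dumps(problem)
    finally:
        # Restore the original attributes on the live object.
        for name, value in saved.items():
            setattr(built, name, value)
```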
#585 now contains an example of how to work around that parallelization issue.
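[Editor's note: the example in #585 is not reproduced in this thread. Below is a generic sketch of the usual pattern for such workarounds, assuming a top-level problem factory and a problem.evaluate(x) interface (both assumptions here): build the object inside each worker so nothing non-picklable ever crosses a process boundary.]

```python
import multiprocessing as mp

_problem = None

def _init_worker(make_problem):
    # Runs once per worker process: construct the problem locally,
    # so the non-picklable object is never sent from the parent.
    global _problem
    _problem = make_problem()

def _evaluate(x):
    return _problem.evaluate(x)

def parallel_evaluate(make_problem, candidates, processes=4):
    # make_problem must be a module-level function so it pickles by reference.
    with mp.Pool(processes, initializer=_init_worker,
                 initargs=(make_problem,)) as pool:
        return pool.map(_evaluate, candidates)
```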
Python Version
3.11.0
Describe the bug
PyBOP models and datasets cannot be pickled, which makes them unusable with my current approach to integrating EP-BOLFI into PyBOP. I need to be able to deepcopy them; furthermore, their not being pickleable makes it impossible to parallelize their evaluation with multiprocessing. It's definitely something inside PyBOP, as the original PyBaMM models serialize just fine (see the contrast check below).
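[Editor's note: a minimal sketch of that contrast check; the choice of pybamm.lithium_ion.SPM is an assumption, any PyBaMM model should do.]

```python
import pickle

import pybamm

# The underlying PyBaMM model pickles without error,
# so the failure comes from the PyBOP layer.
pickle.dumps(pybamm.lithium_ion.SPM())  # succeeds
```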
Is that a known limitation, and if so, is it a necessary side-effect of some desired functionality?
Steps to reproduce the behaviour
For the models:
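[Editor's note: the original snippet was not preserved in this extraction; the following is a hedged reconstruction, with pybop.lithium_ion.SPM assumed as a representative model class.]

```python
import copy
import pickle

import pybop

model = pybop.lithium_ion.SPM()

copy.deepcopy(model)   # raises on the affected versions
pickle.dumps(model)    # likewise fails with a pickling error
```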
For the datasets:
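[Editor's note: likewise a hedged reconstruction of a minimal dataset; the dictionary keys follow PyBOP's usual naming conventions but are assumptions here.]

```python
import pickle

import numpy as np
import pybop

t = np.linspace(0, 100, 11)
dataset = pybop.Dataset({
    "Time [s]": t,
    "Current function [A]": np.ones_like(t),
    "Voltage [V]": 4.0 - 1e-3 * t,
})

pickle.dumps(dataset)  # raises on the affected versions
```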
Relevant log output
For the models:
For the datasets: