Hi, I am trying to run my PyBaMM scripts on a GPU or on an HPC cluster. Is there any documentation explaining this, or has anyone already tried it? I have been unable to reduce the computation time. Thank you in advance.
Hi @kumaryash7, the 23.9rc0 pre-release of PyBaMM includes support for a variant of the Jax solver that provides GPU execution (#3121). You will need to install jax and jaxlib with CUDA enabled, on a Linux machine or under the Windows Subsystem for Linux (WSL2). On macOS, you would instead install the jax-metal bridge by following the official Apple developer instructions. On native Windows, GPU support is only possible through community-maintained unofficial wheels, which require external CUDA and cuDNN installations. There are a few open issues (#3371, #3422, and #3443) which will be resolved soon; in the meantime, you can install PyBaMM from source if you wish to access this feature. The Jax solver does not require compiling the build-time requirements (they may be skipped if you do not want to use the IDAKLU solver).
from joblib import Parallel, delayed

def pybamm_simulation(soc_i, charging_protocol_i):
    """
    Run one PyBaMM simulation for the given initial SOC and
    charging protocol, and save all the results to disk.
    """
    return

if __name__ == '__main__':
    soc_ini = []            # input 1: initial states of charge
    charging_protocol = []  # input 2: charging protocols
    r = Parallel(n_jobs=8)(
        delayed(pybamm_simulation)(soc_i, charging_protocol_i)
        for soc_i, charging_protocol_i in zip(soc_ini, charging_protocol)
    )
Here is a minimal working example. I upload the code to our HPC system and request as many CPUs as I set for n_jobs.
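On a SLURM-based cluster, the matching submission script might look like the sketch below (assuming SLURM; the script name, partition, and walltime are placeholders to adapt to your system):

```shell
#!/bin/bash
#SBATCH --job-name=pybamm-sweep
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8   # match n_jobs in the joblib call
#SBATCH --time=01:00:00
#SBATCH --partition=standard

# run_simulations.py is a placeholder for the script containing the joblib loop
python run_simulations.py
```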