Commit
Merge pull request #128 from bluescarni/pr/step_cb
Update for the new step callback API in heyoka
bluescarni authored Aug 6, 2023
2 parents d1e689c + 03cf349 commit 97e38a7
Showing 17 changed files with 1,972 additions and 1,439 deletions.
9 changes: 9 additions & 0 deletions doc/changelog.rst
@@ -9,6 +9,10 @@ Changelog
New
~~~

- The step callbacks can now optionally implement a ``pre_hook()``
method that will be called before the first step
is taken by a ``propagate_*()`` function
(`#128 <https://github.com/bluescarni/heyoka.py/pull/128>`__).
- Introduce several vectorised overloads in the expression
API. These vectorised overloads allow performing the same
operation on a list of expressions more efficiently
@@ -36,6 +40,11 @@ New
Changes
~~~~~~~

- The step callbacks are now deep-copied in multithreaded
:ref:`ensemble propagations <ensemble_prop>`
rather than being shared among threads. The aim of this change
is to reduce the likelihood of data races
(`#128 <https://github.com/bluescarni/heyoka.py/pull/128>`__).
- Comprehensive overhaul of the expression system, including:
enhanced automatic simplification capabilities for sums,
products and powers, removal of several specialised primitives
11 changes: 10 additions & 1 deletion doc/notebooks/Event detection.ipynb
@@ -143,6 +143,14 @@
"of the event equation at the trigger time as third argument (-1 for negative derivative, 1 for positive derivative and 0 for\n",
"zero derivative).\n",
"\n",
"```{note}\n",
"\n",
"Callbacks are always {func}`deep-copied <copy.deepcopy>` when an event object is created or copied. This behaviour\n",
"is meant to help prevent data races in stateful callbacks during multi-threaded {ref}`ensemble propagations<ensemble_prop>`.\n",
"You can disable the deep-copying behaviour by providing a custom implementation of the ``__deepcopy__`` method\n",
"for the callback, as explained in the documentation of the {mod}`copy` module.\n",
"```\n",
"\n",
"Because non-terminal event detection is performed at the end of an integration step,\n",
"when the callback is invoked the state and time of the integrator object are those *at the end* of the integration\n",
"step in which the event was detected. This bears repeating, as it is often a source of confusion\n",
@@ -528,7 +536,8 @@
"\n",
"- ``callback``: a callback function that will be called when the event triggers. Note that,\n",
" for terminal events, the presence of a callback is optional (whereas it is mandatory for\n",
" non-terminal events);\n",
" non-terminal events). As with non-terminal events, the callback of a terminal event\n",
" (if present) is deep-copied into the event object during construction;\n",
"- ``cooldown``: a floating-point value representing the cooldown time for the terminal event\n",
" (see below for an explanation);\n",
"- ``direction``: a value of the ``event_direction`` enum which, like for non-terminal\n",
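As a side note on the deep-copying behaviour discussed in the hunks above: the sketch below shows how a stateful event callback could opt out of it by defining ``__deepcopy__`` to return the same instance, as suggested in the note block. The callback class and its attribute names are illustrative only, and the three-argument ``__call__`` signature follows the non-terminal event callback convention described above.

```python
import copy


class counting_callback:
    """Hypothetical stateful callback for a non-terminal event."""

    def __init__(self):
        self.n_triggers = 0

    def __call__(self, ta, time, d_sgn):
        # Record every detected event occurrence.
        self.n_triggers += 1

    def __deepcopy__(self, memo):
        # Opt out of deep-copying: event objects constructed from
        # (or copied together with) this callback will all share
        # this single instance. Sharing state like this re-introduces
        # the possibility of data races in multi-threaded ensemble
        # propagations.
        return self


cb = counting_callback()
# deepcopy() now hands back the very same object.
assert copy.deepcopy(cb) is cb
```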
152 changes: 126 additions & 26 deletions doc/notebooks/The adaptive integrator.ipynb
@@ -47,13 +47,13 @@
{
"data": {
"text/plain": [
"Tolerance : 2.2204460492503131e-16\n",
"Tolerance : 2.220446049250313e-16\n",
"High accuracy : false\n",
"Compact mode : false\n",
"Taylor order : 20\n",
"Dimension : 2\n",
"Time : 0.0000000000000000\n",
"State : [0.050000000000000003, 0.025000000000000001]"
"Time : 0\n",
"State : [0.05, 0.025]"
]
},
"execution_count": 1,
@@ -149,13 +149,13 @@
{
"data": {
"text/plain": [
"Tolerance : 2.2204460492503131e-16\n",
"Tolerance : 2.220446049250313e-16\n",
"High accuracy : false\n",
"Compact mode : false\n",
"Taylor order : 20\n",
"Dimension : 2\n",
"Time : 0.21605277478009474\n",
"State : [0.043996448369926382, -0.078442455470687983]"
"State : [0.04399644836992638, -0.07844245547068798]"
]
},
"execution_count": 3,
@@ -257,15 +257,19 @@
"Before moving on, we need to point out an important caveat when using the single\n",
"step functions:\n",
"\n",
"> **WARNING**: if the exact solution of the ODE system can be expressed as a polynomial function\n",
"> of time, the automatic timestep deduction algorithm may return a timestep of infinity.\n",
"> This is the case, for instance, when integrating the rectilinear motion of a free\n",
"> particle or the constant-gravity free-fall equation. In such cases, the step functions\n",
"> should be called with a finite ``max_delta_t`` argument, in order to clamp the timestep\n",
"> to a finite value.\n",
">\n",
"> Note that the ``propagate_*()`` functions (described below)\n",
"> are not affected by this issue.\n",
"```{warning}\n",
"\n",
"If the exact solution of the ODE system can be expressed as a polynomial function\n",
"of time, the automatic timestep deduction algorithm may return a timestep of infinity.\n",
"This is the case, for instance, when integrating the rectilinear motion of a free\n",
"particle or the constant-gravity free-fall equation. In such cases, the step functions\n",
"should be called with a finite ``max_delta_t`` argument, in order to clamp the timestep\n",
"to a finite value.\n",
"\n",
"Note that the ``propagate_*()`` functions (described {ref}`below<time_limited_prop>`)\n",
"are not affected by this issue.\n",
"\n",
"```\n",
"\n",
"Accessing state and time\n",
"------------------------\n",
@@ -340,6 +344,8 @@
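Before moving on, here is a minimal sketch of the clamping suggested in the warning above (assuming heyoka.py's usual ``taylor_adaptive``/``step()`` interface and its ``max_delta_t`` keyword), using constant-gravity free fall, one of the cases whose exact solution is polynomial in time:

```python
import heyoka as hy

x, v = hy.make_vars("x", "v")

# Constant-gravity free fall: dx/dt = v, dv/dt = -g. The exact solution
# is a polynomial in time, so the automatic timestep deduction could
# return an infinite timestep.
ta = hy.taylor_adaptive([(x, v), (v, hy.expression(-9.81))], [0.0, 0.0])

# Clamp the timestep to at most 0.1 time units.
outcome, h = ta.step(max_delta_t=0.1)
print(outcome, h)  # the step size h will not exceed 0.1
```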
"id": "b6b3e66e",
"metadata": {},
"source": [
"(time_limited_prop)=\n",
"\n",
"Time-limited propagation\n",
"------------------------\n",
"\n",
@@ -432,13 +438,13 @@
"Num. of steps: 97\n",
"Current time : 0.0\n",
"\n",
"Tolerance : 2.2204460492503131e-16\n",
"Tolerance : 2.220446049250313e-16\n",
"High accuracy : false\n",
"Compact mode : false\n",
"Taylor order : 20\n",
"Dimension : 2\n",
"Time : 0.0000000000000000\n",
"State : [0.050000000000000044, 0.024999999999999991]\n",
"Time : 0\n",
"State : [0.050000000000000044, 0.02499999999999999]\n",
"\n"
]
}
@@ -467,14 +473,15 @@
"integration will be ``err_nf_state``.\n",
"\n",
"The ``propagate_for()`` and ``propagate_until()`` methods\n",
"can be invoked with two additional optional keyword arguments:\n",
"can be invoked with additional optional keyword arguments:\n",
"\n",
"- ``max_delta_t``: similarly to the ``step()`` function, this value\n",
" represents the maximum timestep size in absolute value;\n",
"- ``callback``: this is a callable which will be invoked at the end of\n",
" each timestep, with the integrator object as the only argument. If the callback returns ``True`` then the integration\n",
" will continue after the invocation of the callback, otherwise the integration\n",
" will be interrupted."
" will be interrupted;\n",
"- ``c_output``: a boolean flag that enables [continuous output](<./Dense output.ipynb>)."
]
},
{
@@ -508,6 +515,53 @@
"ta.propagate_until(t = .5, max_delta_t = .1, callback = cb);"
]
},
{
"cell_type": "markdown",
"id": "676509c9-7002-4a82-a9a9-6c7328c1c741",
"metadata": {},
"source": [
"Optionally, callbacks can implement a ``pre_hook()`` method that will be invoked\n",
"once *before* the first step is taken by the ``propagate_for()`` and ``propagate_until()`` methods:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "f9d7051c-7ce7-42cb-8f54-c2b8b77dc5b0",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"pre_hook() invoked!\n",
"Current time: 0.6\n",
"Current time: 0.7\n",
"Current time: 0.8\n",
"Current time: 0.9\n",
"Current time: 1.0\n",
"Current time: 1.1\n",
"Current time: 1.2\n",
"Current time: 1.3\n",
"Current time: 1.4000000000000001\n",
"Current time: 1.5\n"
]
}
],
"source": [
"# Callback with pre_hook().\n",
"class cb:\n",
" def __call__(self, ta):\n",
" print(\"Current time: {}\".format(ta.time))\n",
" return True\n",
" def pre_hook(self, ta):\n",
" print(\"pre_hook() invoked!\")\n",
"\n",
"ta.propagate_until(t = 1.5, max_delta_t = .1, callback = cb());"
]
},
{
"cell_type": "markdown",
"id": "45f92420",
@@ -530,7 +584,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 12,
"id": "b07369e2",
"metadata": {},
"outputs": [
@@ -553,7 +607,7 @@
" [-0.04990399, -0.02681336]]))"
]
},
"execution_count": 11,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -589,7 +643,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 13,
"id": "70aac3ac",
"metadata": {},
"outputs": [
@@ -616,19 +670,20 @@
"There are no special requirements on the time values in the grid (apart from the fact that they must be finite and ordered monotonically).\n",
"\n",
"The ``propagate_grid()`` method\n",
"can be invoked with two additional optional keyword arguments:\n",
"can be invoked with additional optional keyword arguments:\n",
"\n",
"- ``max_delta_t``: similarly to the ``step()`` function, this value\n",
" represents the maximum timestep size in absolute value;\n",
"- ``callback``: this is a callable which will be invoked at the end of\n",
" each timestep, with the integrator object as the only argument. If the callback returns ``True`` then the integration\n",
" will continue after the invocation of the callback, otherwise the integration\n",
" will be interrupted."
" will be interrupted;\n",
"- ``c_output``: a boolean flag that enables [continuous output](<./Dense output.ipynb>)."
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 14,
"id": "4eb16baf-5196-4e8d-9a57-de2ed21ba935",
"metadata": {},
"outputs": [
@@ -661,6 +716,51 @@
"ta.propagate_grid(grid = [1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.],\n",
" max_delta_t = .1, callback = cb);"
]
},
{
"cell_type": "markdown",
"id": "c68969fc-344b-4993-8dd9-1a1eeee0588d",
"metadata": {},
"source": [
"Optionally, callbacks can implement a ``pre_hook()`` method that will be invoked\n",
"once *before* the first step is taken by the ``propagate_grid()`` method:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "e196452a-153a-48ee-a831-f8543345cf75",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"pre_hook() invoked!\n",
"Current time: 1.2000000000000002\n",
"Current time: 1.3\n",
"Current time: 1.4000000000000001\n",
"Current time: 1.5\n",
"Current time: 1.6\n",
"Current time: 1.7000000000000002\n",
"Current time: 1.8\n",
"Current time: 1.9000000000000001\n",
"Current time: 2.0\n"
]
}
],
"source": [
"# Callback with pre_hook().\n",
"class cb:\n",
" def __call__(self, ta):\n",
" print(\"Current time: {}\".format(ta.time))\n",
" return True\n",
" def pre_hook(self, ta):\n",
" print(\"pre_hook() invoked!\")\n",
"\n",
"ta.propagate_grid(grid = [1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.],\n",
" max_delta_t = .1, callback = cb());"
]
}
],
"metadata": {
Expand All @@ -679,7 +779,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
"version": "3.10.6"
}
},
"nbformat": 4,
50 changes: 25 additions & 25 deletions doc/notebooks/ensemble_mode.ipynb

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions doc/notebooks/ex_system_revisited.ipynb
@@ -1403,9 +1403,9 @@
"\n",
"In conclusion, how should one choose whether to use ``diff()`` or ``diff_tensors()`` to compute symbolic derivatives in heyoka.py?\n",
"\n",
"``diff()`` should be used when the goal is to produce human-readable symbolic expressions, and when one wants to take advantage of heyoka.py's symbolic simplification capabilities.\n",
"In general, ``diff()`` should be used when the goal is to produce human-readable symbolic expressions, and when one wants to take advantage of heyoka.py's symbolic simplification capabilities.\n",
"\n",
"If, however, the goal is to optimise the performance of the numerical evaluation of derivatives, then one should use ``diff_tensors()``."
"If, however, the goal is to optimise the performance of the numerical evaluation of derivatives, then one should consider using ``diff_tensors()`` instead."
]
}
],
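To make the ``diff()`` vs ``diff_tensors()`` contrast above concrete, a small sketch follows; the ``diff_tensors()`` keyword names used here are assumptions, and the notebook itself remains the authoritative reference:

```python
import heyoka as hy

x, y = hy.make_vars("x", "y")
f = hy.cos(x * y) + x * x

# diff(): human-readable symbolic derivative, benefiting from
# automatic simplification.
print(hy.diff(f, x))

# diff_tensors(): derivative tensors up to order 2, geared towards
# efficient numerical evaluation (keyword names assumed).
dt = hy.diff_tensors([f], diff_args=[x, y], diff_order=2)
print(dt)
```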
4 changes: 4 additions & 0 deletions heyoka/CMakeLists.txt
@@ -21,6 +21,9 @@ set(HEYOKA_PY_PYTHON_FILES
_test_model.py
_test_expression.py
_test_dtens.py
_test_scalar_integrator.py
_test_batch_integrator.py
_test_ensemble.py
model/__init__.py
)

@@ -52,6 +55,7 @@ set(_HEYOKA_PY_CORE_SOURCES
expose_batch_integrators.cpp
numpy_memory.cpp
expose_models.cpp
step_cb_wrapper.cpp
)

Python3_add_library(core MODULE WITH_SOABI ${_HEYOKA_PY_CORE_SOURCES})
27 changes: 23 additions & 4 deletions heyoka/_ensemble_impl.py
@@ -22,23 +22,42 @@ def _splat_grid(arg, ta):
# Thread-based implementation.
def _ensemble_propagate_thread(tp, ta, arg, n_iter, gen, **kwargs):
from concurrent.futures import ThreadPoolExecutor
from copy import deepcopy
from copy import deepcopy, copy

# Pop the multithreading options from kwargs.
max_workers = kwargs.pop("max_workers", None)

# Make deep copies of the callback argument, if present.
if "callback" in kwargs:
kwargs_list = []

for i in range(n_iter):
# Make a shallow copy of the original kwargs.
# new_kwargs will be a new dict containing
# references to the objects stored in kwargs.
new_kwargs = copy(kwargs)

# Update the callback argument in new_kwargs
# with a deep copy of the original callback object in
# kwargs.
new_kwargs.update(callback=deepcopy(kwargs["callback"]))

kwargs_list.append(new_kwargs)
else:
kwargs_list = [kwargs] * n_iter

# The worker function.
def func(i):
# Create the local integrator.
local_ta = gen(deepcopy(ta), i)

# Run the propagation.
if tp == "until":
loc_ret = local_ta.propagate_until(arg, **kwargs)
loc_ret = local_ta.propagate_until(arg, **kwargs_list[i])
elif tp == "for":
loc_ret = local_ta.propagate_for(arg, **kwargs)
loc_ret = local_ta.propagate_for(arg, **kwargs_list[i])
else:
loc_ret = local_ta.propagate_grid(_splat_grid(arg, ta), **kwargs)
loc_ret = local_ta.propagate_grid(_splat_grid(arg, ta), **kwargs_list[i])

# Return the results.
# NOTE: in batch mode, loc_ret will be single
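To illustrate what the per-member deep copies introduced above mean for users, here is a minimal sketch (assuming the ``ensemble_propagate_until()`` helper and the pendulum system from the tutorials): each ensemble member receives its own copy of the stateful callback, so the per-callback counters cannot race across threads.

```python
import heyoka as hy


# A stateful step callback counting the timesteps taken.
class step_counter:
    def __init__(self):
        self.n_steps = 0

    def __call__(self, ta):
        self.n_steps += 1
        # Returning True allows the propagation to continue.
        return True


# Simple pendulum.
x, v = hy.make_vars("x", "v")
ta = hy.taylor_adaptive([(x, v), (v, -9.8 * hy.sin(x))], [0.05, 0.025])


# Generator: perturb the initial conditions of each ensemble member.
def gen(ta_copy, i):
    ta_copy.state[0] += i * 1e-4
    return ta_copy


# Thread-based ensemble propagation: the callback passed here is
# deep-copied once per ensemble member, so every thread updates
# its own, independent counter.
ret = hy.ensemble_propagate_until(ta, 10.0, 4, gen, callback=step_counter())
```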