Commit

Doc fixes.
bluescarni committed Jan 21, 2024
1 parent 7511ffb commit f5f2fbe
Showing 2 changed files with 22 additions and 27 deletions.
36 changes: 18 additions & 18 deletions doc/notebooks/computing_derivatives.ipynb
@@ -44,7 +44,7 @@
"source": [
"The ``diff()`` function implements symbolic differentiation of an expression with respect to a variable (or a parameter) via a straightforward application of the chain rule. Repeated applications of ``diff()`` can be used to compute higher-order derivatives.\n",
"\n",
"The other way of computing derivatives in heyoka.py is the :func:`~heyoka.diff_tensors()` function. :func:`~heyoka.diff_tensors()` computes the full tensors of derivatives up to an arbitrary order. Let us see a simple example:"
"The other way of computing derivatives in heyoka.py is the {func}`~heyoka.diff_tensors()` function. {func}`~heyoka.diff_tensors()` computes the full tensors of derivatives up to an arbitrary order. Let us see a simple example:"
]
},
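The notebook's code cells are collapsed in this diff. For reference, a minimal sketch of the call being described could look like the following (the two function components are illustrative assumptions, not the notebook's actual example):

```python
import heyoka as hy

x, y, z = hy.make_vars("x", "y", "z")

# Full tensors of derivatives of a 2-component vector-valued
# function with respect to all three variables, up to order 1.
dt = hy.diff_tensors([x * y + z, x - y * z],
                     diff_args=[x, y, z],
                     diff_order=1)
print(dt)
```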
{
@@ -63,7 +63,7 @@
"id": "dc351857-af2e-4d10-9578-79f465691c3f",
"metadata": {},
"source": [
"Here we are asking for the derivatives of a 2-component vector-valued function with respect to all variables up to order 1. The value returned by :func:`~heyoka.diff_tensors()` is an object of type :class:`~heyoka.dtens`. Let us explore it a bit:"
"Here we are asking for the derivatives of a 2-component vector-valued function with respect to all variables up to order 1. The value returned by {func}`~heyoka.diff_tensors()` is an object of type {class}`~heyoka.dtens`. Let us explore it a bit:"
]
},
{
@@ -94,7 +94,7 @@
"id": "9840a4cd-70be-4441-8b42-503653a265d9",
"metadata": {},
"source": [
"The screen output shows some general information: the derivative order, the number of outputs and the differentiation arguments. :class:`~heyoka.dtens` objects have a length corresponding to the total number of derivatives stored within the object:"
"The screen output shows some general information: the derivative order, the number of outputs and the differentiation arguments. {class}`~heyoka.dtens` objects have a length corresponding to the total number of derivatives stored within the object:"
]
},
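Continuing the sketch above, the length check described here would read:

```python
# 2 order-0 entries plus 6 order-1 entries.
print(len(dt))  # 8
```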
{
@@ -123,9 +123,9 @@
"id": "65a6f78a-c374-4786-9f51-a4fb41783334",
"metadata": {},
"source": [
"Here we have 8 derivatives in total: 2 order-0 derivatives (i.e., the original components of the function, which are returned in the :class:`~heyoka.dtens` object) and 6 order-1 derivatives (3 for each component of the function).\n",
"Here we have 8 derivatives in total: 2 order-0 derivatives (i.e., the original components of the function, which are returned in the {class}`~heyoka.dtens` object) and 6 order-1 derivatives (3 for each component of the function).\n",
"\n",
-":class:`~heyoka.dtens` is a dictionary-like ordered container mapping vectors of integral indices to derivatives. It is possible to iterate over the indices vectors, so that, e.g., we can build an ordered list of all the indices vectors in ``dt``:"
+"{class}`~heyoka.dtens` is a dictionary-like ordered container mapping vectors of integral indices to derivatives. It is possible to iterate over the indices vectors, so that, e.g., we can build an ordered list of all the indices vectors in ``dt``:"
]
},
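A sketch of the iteration being described, reusing ``dt`` from the earlier example:

```python
# Iterating over a dtens yields its index vectors, in order.
idx_list = list(dt)
# The first entry is the order-0 "derivative" (i.e., the value)
# of the first function component.
print(idx_list[0])  # [0, 0, 0, 0]
```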
{
@@ -163,7 +163,7 @@
"source": [
"Each index vector begins with an index representing the function component. The remaining indices represent the derivative orders with respect to the variables. Thus, the index vector ``[1, 0, 1, 0]`` is used to indicate the first-order derivative of the second function component with respect to the second variable $y$.\n",
"\n",
"Indices vector in a :class:`~heyoka.dtens` object are sorted as follows:\n",
"Indices vector in a {class}`~heyoka.dtens` object are sorted as follows:\n",
"\n",
"- first, according to the total differentiation order in ascending order,\n",
"- then, according to the function component in ascending order,\n",
@@ -229,7 +229,7 @@
"source": [
"That is, in sparse format the square brackets operator takes in input a pair consisting of an integer (the function component) and a list of (variable index, diff order) pairs where the diff order is always nonzero. Sparse format performs better when working with first-order derivatives of functions with many variables, a situation in which dense indices vectors would be wastefully filled with zeroes.\n",
"\n",
"We can check if a specific index vector appears in a :class:`~heyoka.dtens` object via the ``in`` operator:"
"We can check if a specific index vector appears in a {class}`~heyoka.dtens` object via the ``in`` operator:"
]
},
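A sketch of both access modes, assuming dense indexing accepts the full index vector and sparse indexing the (component, list of pairs) form described above:

```python
# Dense access: [component, order wrt x, order wrt y, order wrt z].
print(dt[1, 0, 1, 0])

# Sparse access: the same derivative, written as
# (component, [(variable index, nonzero diff order), ...]).
print(dt[1, [(1, 1)]])

# Membership test via the in operator.
print([1, 0, 1, 0] in dt)  # True
```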
{
@@ -279,7 +279,7 @@
"id": "2976b334-4f62-41af-8eeb-38ba9948fe7a",
"metadata": {},
"source": [
"The :func:`~heyoka.dtens.get_derivatives()` method can be used to fetch all derivatives for a specific total order. For instance, we can fetch the Jacobian from ``dt``:"
"The {func}`~heyoka.dtens.get_derivatives()` method can be used to fetch all derivatives for a specific total order. For instance, we can fetch the Jacobian from ``dt``:"
]
},
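A sketch of the call, assuming it returns (index vector, expression) pairs for the requested total order:

```python
# All order-1 derivatives, i.e., the entries of the Jacobian.
for idx, expr in dt.get_derivatives(1):
    print(idx, "->", expr)
```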
{
@@ -348,7 +348,7 @@
"\n",
"```\n",
"\n",
"The convenience properties :attr:`~heyoka.dtens.gradient` and :attr:`~heyoka.dtens.jacobian` can be used to access the gradient and Jacobian from a :class:`~heyoka.dtens` object. For instance:"
"The convenience properties {attr}`~heyoka.dtens.gradient` and {attr}`~heyoka.dtens.jacobian` can be used to access the gradient and Jacobian from a {class}`~heyoka.dtens` object. For instance:"
]
},
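A sketch of the two properties; note that the gradient is defined only for a single-output function, so a separate scalar example is used for it:

```python
# Jacobian of the 2-component function: a 2x3 array of expressions.
print(dt.jacobian.shape)  # (2, 3)

# The gradient requires a single-component function.
grad = hy.diff_tensors([x * y * z], diff_args=[x, y, z]).gradient
print(grad)
```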
{
@@ -383,19 +383,19 @@
"source": [
"## diff() vs diff_tensors()\n",
"\n",
"``diff()`` and :func:`~heyoka.diff_tensors()` both compute symbolic derivatives, and thus produce mathematically-equivalent results. However, the symbolic structure of the expressions returned by ``diff()`` and :func:`~heyoka.diff_tensors()` can differ in ways that have a profound impact on the runtime complexity of the numerical evaluation of the derivatives.\n",
"``diff()`` and {func}`~heyoka.diff_tensors()` both compute symbolic derivatives, and thus produce mathematically-equivalent results. However, the symbolic structure of the expressions returned by ``diff()`` and {func}`~heyoka.diff_tensors()` can differ in ways that have a profound impact on the runtime complexity of the numerical evaluation of the derivatives.\n",
"\n",
"``diff()`` always applies the chain rule in [forward mode](https://en.wikipedia.org/wiki/Automatic_differentiation#Forward_accumulation), and it also always applies the automatic simplifications described earlier in this tutorial. It is somewhat analogue to a pen-and-paper calculation.\n",
"\n",
"By contrast, :func:`~heyoka.diff_tensors()` may choose, depending on the function being differentiated and on the differentiation order, to perform symbolic differentiation in [reverse mode](https://en.wikipedia.org/wiki/Automatic_differentiation#Reverse_accumulation), rather than in forward mode. :func:`~heyoka.diff_tensors()` also extensively uses ``fix()`` to {ref}`disable most automatic simplifications <ex_system_prev_simpl>` with the goal of producing symbolic expressions that are as close as possible to verbatim transcriptions of the forward/reverse mode automatic differentiation (AD) algorithms.\n",
"By contrast, {func}`~heyoka.diff_tensors()` may choose, depending on the function being differentiated and on the differentiation order, to perform symbolic differentiation in [reverse mode](https://en.wikipedia.org/wiki/Automatic_differentiation#Reverse_accumulation), rather than in forward mode. {func}`~heyoka.diff_tensors()` also extensively uses ``fix()`` to {ref}`disable most automatic simplifications <ex_system_prev_simpl>` with the goal of producing symbolic expressions that are as close as possible to verbatim transcriptions of the forward/reverse mode automatic differentiation (AD) algorithms.\n",
"\n",
"In order to clarify, let us show a concrete example where the differences between ``diff()`` and :func:`~heyoka.diff_tensors()` matter a great deal. Speelpenning's function\n",
"In order to clarify, let us show a concrete example where the differences between ``diff()`` and {func}`~heyoka.diff_tensors()` matter a great deal. Speelpenning's function\n",
"\n",
"$$\n",
"f\\left(x_1, x_2, \\ldots, x_n \\right) = x_1\\cdot x_2 \\cdot \\ldots \\cdot x_n\n",
"$$\n",
"\n",
"has often been used to ([incorrectly](https://arxiv.org/abs/1904.02990)) argue that numerical reverse-mode AD is superior to symbolic differentiation. Let us try to compute the gradient of this function first using ``diff()`` and then using :func:`~heyoka.diff_tensors()`.\n",
"has often been used to ([incorrectly](https://arxiv.org/abs/1904.02990)) argue that numerical reverse-mode AD is superior to symbolic differentiation. Let us try to compute the gradient of this function first using ``diff()`` and then using {func}`~heyoka.diff_tensors()`.\n",
"\n",
"We begin with the definition of the function (taking $n=8$):"
]
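The definition and the ``diff()``-based gradient sit in collapsed cells; a minimal reconstruction under the stated assumptions (n = 8, forward-mode ``diff()``):

```python
from functools import reduce
import operator

import heyoka as hy

# Speelpenning's function, taking n = 8.
xs = hy.make_vars(*[f"x_{i}" for i in range(1, 9)])
func = reduce(operator.mul, xs)

# Gradient via forward-mode diff(): each component is an
# (n-1)-term product, so naively evaluating all n components
# costs O(n^2) operations.
grad_diff = [hy.diff(func, x) for x in xs]
```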
@@ -504,7 +504,7 @@
"id": "c1d8e0ba-2fe3-41d8-9ba5-5d9effaa0852",
"metadata": {},
"source": [
"Now let us take a look at how :func:`~heyoka.diff_tensors()` behaves instead:"
"Now let us take a look at how {func}`~heyoka.diff_tensors()` behaves instead:"
]
},
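Again as a sketch:

```python
# diff_tensors() selects reverse-mode differentiation
# automatically for a multivariate scalar function.
dt_s = hy.diff_tensors([func], diff_args=list(xs))
grad_diff_tensors = dt_s.gradient
```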
{
@@ -544,7 +544,7 @@
"id": "bae7373a-ffdd-4bb4-989a-7a5bdb3f5ca5",
"metadata": {},
"source": [
"When computing the gradient of a multivariate scalar function, :func:`~heyoka.diff_tensors()` automatically selects reverse-mode symbolic differentiation. We can see how the reverse-mode AD algorithm collects the terms in nested binary multiplications which occur multiple times in the gradient's components. The flattening of these binary multiplications is prevented by the use of the ``fix()`` function. As a consequence, when creating a compiled function for the evaluation of ``grad_diff_tensors``, the repeated subexpressions are recognised by heyoka.py (via a process of [common subexpression elimination](https://en.wikipedia.org/wiki/Common_subexpression_elimination)) and evaluated only once. Let us check:"
"When computing the gradient of a multivariate scalar function, {func}`~heyoka.diff_tensors()` automatically selects reverse-mode symbolic differentiation. We can see how the reverse-mode AD algorithm collects the terms in nested binary multiplications which occur multiple times in the gradient's components. The flattening of these binary multiplications is prevented by the use of the ``fix()`` function. As a consequence, when creating a compiled function for the evaluation of ``grad_diff_tensors``, the repeated subexpressions are recognised by heyoka.py (via a process of [common subexpression elimination](https://en.wikipedia.org/wiki/Common_subexpression_elimination)) and evaluated only once. Let us check:"
]
},
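A sketch of the check, assuming the ``cfunc()`` compiled-function interface:

```python
import numpy as np

# Compile the reverse-mode gradient; the repeated subexpressions
# are recognised and evaluated only once.
cf = hy.cfunc(grad_diff_tensors, vars=list(xs))
print(cf(np.linspace(0.1, 0.8, 8)))
```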
{
@@ -591,7 +591,7 @@
"id": "20c8c95d-0cd6-4cfa-afec-c2eef3c66338",
"metadata": {},
"source": [
"The total number of operations needed to evaluate the gradient has been reduced from 48 (via ``diff()``) to 18 (via :func:`~heyoka.diff_tensors()`). Indeed, we can show how the computational complexity of the evaluation of the gradient of Speelpenning's function via reverse-mode symbolic AD scales linearly $\\operatorname{O}\\left(n\\right)$ with the number of variables:"
"The total number of operations needed to evaluate the gradient has been reduced from 48 (via ``diff()``) to 18 (via {func}`~heyoka.diff_tensors()`). Indeed, we can show how the computational complexity of the evaluation of the gradient of Speelpenning's function via reverse-mode symbolic AD scales linearly $\\operatorname{O}\\left(n\\right)$ with the number of variables:"
]
},
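The measurement cells are collapsed in the diff; a skeleton for such a scan might look as follows (the operation count in the compiled decomposition is the quantity the notebook actually measures):

```python
# Reverse-mode gradients of Speelpenning's function for growing n.
for n in (8, 16, 32, 64):
    vs = hy.make_vars(*[f"y_{i}" for i in range(1, n + 1)])
    g = hy.diff_tensors([reduce(operator.mul, vs)],
                        diff_args=list(vs)).gradient
    print(n, len(g))  # g always has n components
```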
{
@@ -648,11 +648,11 @@
"\n",
"## General guidelines\n",
"\n",
"In conclusion, how should one choose whether to use ``diff()`` or :func:`~heyoka.diff_tensors()` to compute symbolic derivatives in heyoka.py?\n",
"In conclusion, how should one choose whether to use ``diff()`` or {func}`~heyoka.diff_tensors()` to compute symbolic derivatives in heyoka.py?\n",
"\n",
"In general, ``diff()`` should be used when the goal is to produce human-readable symbolic expressions, and when one wants to take advantage of heyoka.py's symbolic simplification capabilities.\n",
"\n",
"If, however, the goal is to optimise the performance of the numerical evaluation of derivatives, then one should consider using :func:`~heyoka.diff_tensors()` instead."
"If, however, the goal is to optimise the performance of the numerical evaluation of derivatives, then one should consider using {func}`~heyoka.diff_tensors()` instead."
]
}
],
13 changes: 4 additions & 9 deletions doc/notebooks/mercury_precession.ipynb
@@ -75,7 +75,7 @@
"id": "e38dd5e8-117a-4b4b-adfa-71199252a148",
"metadata": {},
"source": [
"We can now proceed to the creation of the integrator object:"
"We can now proceed to the creation of the integrator object. We will be using the {func}`~heyoka.hamiltonian()` function to automatically generate the equations of motion from the Hamiltonian:"
]
},
{
@@ -87,14 +87,9 @@
"source": [
"ta = hy.taylor_adaptive(\n",
" # Hamilton's equations.\n",
" [(vx, -hy.diff(Ham, x)),\n",
" (vy, -hy.diff(Ham, y)),\n",
" (vz, -hy.diff(Ham, z)),\n",
" (x, hy.diff(Ham, vx)),\n",
" (y, hy.diff(Ham, vy)),\n",
" (z, hy.diff(Ham, vz))],\n",
" hy.hamiltonian(Ham, [x, y, z], [vx, vy, vz]),\n",
" # Initial conditions.\n",
" v0 + r0\n",
" r0 + v0\n",
")"
]
},
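For context (the surrounding cells are collapsed): {func}`~heyoka.hamiltonian()` generates Hamilton's equations from the Hamiltonian, replacing the hand-written ``hy.diff()`` pairs removed above. A sketch of the equivalence:

```python
# hy.hamiltonian() returns the system of ODEs
#   dq/dt = dH/dp,  dp/dt = -dH/dq,
# with q = (x, y, z) and p = (vx, vy, vz) (unit-mass momenta),
# i.e. the same equations previously spelled out via hy.diff().
sys = hy.hamiltonian(Ham, [x, y, z], [vx, vy, vz])
```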
@@ -133,7 +128,7 @@
"metadata": {},
"outputs": [],
"source": [
"kep_out = np.array([pk.ic2par(_[3:], _[0:3], mu = mu) for _ in out])"
"kep_out = np.array([pk.ic2par(_[0:3], _[3:], mu = mu) for _ in out])"
]
},
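Note that the two fixes in this file are consistent with each other: with the state laid out as ``[x, y, z, vx, vy, vz]`` (positions first), the initial conditions become ``r0 + v0``, and pykep's ``ic2par()`` correspondingly receives the position slice ``_[0:3]`` before the velocity slice ``_[3:]``.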
{