Update Sphinx documentation, commit f5f2fbe [skip ci].
bluescarni committed Jan 21, 2024
1 parent 78ae3ab commit f011293
Showing 40 changed files with 190 additions and 200 deletions.
36 changes: 18 additions & 18 deletions _sources/notebooks/computing_derivatives.ipynb
@@ -44,7 +44,7 @@
"source": [
"The ``diff()`` function implements symbolic differentiation of an expression with respect to a variable (or a parameter) via a straightforward application of the chain rule. Repeated applications of ``diff()`` can be used to compute higher-order derivatives.\n",
"\n",
"The other way of computing derivatives in heyoka.py is the :func:`~heyoka.diff_tensors()` function. :func:`~heyoka.diff_tensors()` computes the full tensors of derivatives up to an arbitrary order. Let us see a simple example:"
"The other way of computing derivatives in heyoka.py is the {func}`~heyoka.diff_tensors()` function. {func}`~heyoka.diff_tensors()` computes the full tensors of derivatives up to an arbitrary order. Let us see a simple example:"
]
},
{
@@ -63,7 +63,7 @@
"id": "dc351857-af2e-4d10-9578-79f465691c3f",
"metadata": {},
"source": [
"Here we are asking for the derivatives of a 2-component vector-valued function with respect to all variables up to order 1. The value returned by :func:`~heyoka.diff_tensors()` is an object of type :class:`~heyoka.dtens`. Let us explore it a bit:"
"Here we are asking for the derivatives of a 2-component vector-valued function with respect to all variables up to order 1. The value returned by {func}`~heyoka.diff_tensors()` is an object of type {class}`~heyoka.dtens`. Let us explore it a bit:"
]
},
{
@@ -94,7 +94,7 @@
"id": "9840a4cd-70be-4441-8b42-503653a265d9",
"metadata": {},
"source": [
"The screen output shows some general information: the derivative order, the number of outputs and the differentiation arguments. :class:`~heyoka.dtens` objects have a length corresponding to the total number of derivatives stored within the object:"
"The screen output shows some general information: the derivative order, the number of outputs and the differentiation arguments. {class}`~heyoka.dtens` objects have a length corresponding to the total number of derivatives stored within the object:"
]
},
{
@@ -123,9 +123,9 @@
"id": "65a6f78a-c374-4786-9f51-a4fb41783334",
"metadata": {},
"source": [
"Here we have 8 derivatives in total: 2 order-0 derivatives (i.e., the original components of the function, which are returned in the :class:`~heyoka.dtens` object) and 6 order-1 derivatives (3 for each component of the function).\n",
"Here we have 8 derivatives in total: 2 order-0 derivatives (i.e., the original components of the function, which are returned in the {class}`~heyoka.dtens` object) and 6 order-1 derivatives (3 for each component of the function).\n",
"\n",
":class:`~heyoka.dtens` is a dictionary-like ordered container mapping vectors of integral indices to derivatives. It is possible to iterate over the indices vectors, so that, e.g., we can build an ordered list of all the indices vectors in ``dt``:"
"{class}`~heyoka.dtens` is a dictionary-like ordered container mapping vectors of integral indices to derivatives. It is possible to iterate over the indices vectors, so that, e.g., we can build an ordered list of all the indices vectors in ``dt``:"
]
},
{
@@ -163,7 +163,7 @@
"source": [
"Each index vector begins with an index representing the function component. The remaining indices represent the derivative orders with respect to the variables. Thus, the index vector ``[1, 0, 1, 0]`` is used to indicate the first-order derivative of the second function component with respect to the second variable $y$.\n",
"\n",
"Indices vector in a :class:`~heyoka.dtens` object are sorted as follows:\n",
"Indices vector in a {class}`~heyoka.dtens` object are sorted as follows:\n",
"\n",
"- first, according to the total differentiation order in ascending order,\n",
"- then, according to the function component in ascending order,\n",
@@ -229,7 +229,7 @@
"source": [
"That is, in sparse format the square brackets operator takes in input a pair consisting of an integer (the function component) and a list of (variable index, diff order) pairs where the diff order is always nonzero. Sparse format performs better when working with first-order derivatives of functions with many variables, a situation in which dense indices vectors would be wastefully filled with zeroes.\n",
"\n",
"We can check if a specific index vector appears in a :class:`~heyoka.dtens` object via the ``in`` operator:"
"We can check if a specific index vector appears in a {class}`~heyoka.dtens` object via the ``in`` operator:"
]
},
{
@@ -279,7 +279,7 @@
"id": "2976b334-4f62-41af-8eeb-38ba9948fe7a",
"metadata": {},
"source": [
"The :func:`~heyoka.dtens.get_derivatives()` method can be used to fetch all derivatives for a specific total order. For instance, we can fetch the Jacobian from ``dt``:"
"The {func}`~heyoka.dtens.get_derivatives()` method can be used to fetch all derivatives for a specific total order. For instance, we can fetch the Jacobian from ``dt``:"
]
},
{
@@ -348,7 +348,7 @@
"\n",
"```\n",
"\n",
"The convenience properties :attr:`~heyoka.dtens.gradient` and :attr:`~heyoka.dtens.jacobian` can be used to access the gradient and Jacobian from a :class:`~heyoka.dtens` object. For instance:"
"The convenience properties {attr}`~heyoka.dtens.gradient` and {attr}`~heyoka.dtens.jacobian` can be used to access the gradient and Jacobian from a {class}`~heyoka.dtens` object. For instance:"
]
},
{
@@ -383,19 +383,19 @@
"source": [
"## diff() vs diff_tensors()\n",
"\n",
"``diff()`` and :func:`~heyoka.diff_tensors()` both compute symbolic derivatives, and thus produce mathematically-equivalent results. However, the symbolic structure of the expressions returned by ``diff()`` and :func:`~heyoka.diff_tensors()` can differ in ways that have a profound impact on the runtime complexity of the numerical evaluation of the derivatives.\n",
"``diff()`` and {func}`~heyoka.diff_tensors()` both compute symbolic derivatives, and thus produce mathematically-equivalent results. However, the symbolic structure of the expressions returned by ``diff()`` and {func}`~heyoka.diff_tensors()` can differ in ways that have a profound impact on the runtime complexity of the numerical evaluation of the derivatives.\n",
"\n",
"``diff()`` always applies the chain rule in [forward mode](https://en.wikipedia.org/wiki/Automatic_differentiation#Forward_accumulation), and it also always applies the automatic simplifications described earlier in this tutorial. It is somewhat analogue to a pen-and-paper calculation.\n",
"\n",
"By contrast, :func:`~heyoka.diff_tensors()` may choose, depending on the function being differentiated and on the differentiation order, to perform symbolic differentiation in [reverse mode](https://en.wikipedia.org/wiki/Automatic_differentiation#Reverse_accumulation), rather than in forward mode. :func:`~heyoka.diff_tensors()` also extensively uses ``fix()`` to {ref}`disable most automatic simplifications <ex_system_prev_simpl>` with the goal of producing symbolic expressions that are as close as possible to verbatim transcriptions of the forward/reverse mode automatic differentiation (AD) algorithms.\n",
"By contrast, {func}`~heyoka.diff_tensors()` may choose, depending on the function being differentiated and on the differentiation order, to perform symbolic differentiation in [reverse mode](https://en.wikipedia.org/wiki/Automatic_differentiation#Reverse_accumulation), rather than in forward mode. {func}`~heyoka.diff_tensors()` also extensively uses ``fix()`` to {ref}`disable most automatic simplifications <ex_system_prev_simpl>` with the goal of producing symbolic expressions that are as close as possible to verbatim transcriptions of the forward/reverse mode automatic differentiation (AD) algorithms.\n",
"\n",
"In order to clarify, let us show a concrete example where the differences between ``diff()`` and :func:`~heyoka.diff_tensors()` matter a great deal. Speelpenning's function\n",
"In order to clarify, let us show a concrete example where the differences between ``diff()`` and {func}`~heyoka.diff_tensors()` matter a great deal. Speelpenning's function\n",
"\n",
"$$\n",
"f\\left(x_1, x_2, \\ldots, x_n \\right) = x_1\\cdot x_2 \\cdot \\ldots \\cdot x_n\n",
"$$\n",
"\n",
"has often been used to ([incorrectly](https://arxiv.org/abs/1904.02990)) argue that numerical reverse-mode AD is superior to symbolic differentiation. Let us try to compute the gradient of this function first using ``diff()`` and then using :func:`~heyoka.diff_tensors()`.\n",
"has often been used to ([incorrectly](https://arxiv.org/abs/1904.02990)) argue that numerical reverse-mode AD is superior to symbolic differentiation. Let us try to compute the gradient of this function first using ``diff()`` and then using {func}`~heyoka.diff_tensors()`.\n",
"\n",
"We begin with the definition of the function (taking $n=8$):"
]
@@ -504,7 +504,7 @@
"id": "c1d8e0ba-2fe3-41d8-9ba5-5d9effaa0852",
"metadata": {},
"source": [
"Now let us take a look at how :func:`~heyoka.diff_tensors()` behaves instead:"
"Now let us take a look at how {func}`~heyoka.diff_tensors()` behaves instead:"
]
},
{
Expand Down Expand Up @@ -544,7 +544,7 @@
"id": "bae7373a-ffdd-4bb4-989a-7a5bdb3f5ca5",
"metadata": {},
"source": [
"When computing the gradient of a multivariate scalar function, :func:`~heyoka.diff_tensors()` automatically selects reverse-mode symbolic differentiation. We can see how the reverse-mode AD algorithm collects the terms in nested binary multiplications which occur multiple times in the gradient's components. The flattening of these binary multiplications is prevented by the use of the ``fix()`` function. As a consequence, when creating a compiled function for the evaluation of ``grad_diff_tensors``, the repeated subexpressions are recognised by heyoka.py (via a process of [common subexpression elimination](https://en.wikipedia.org/wiki/Common_subexpression_elimination)) and evaluated only once. Let us check:"
"When computing the gradient of a multivariate scalar function, {func}`~heyoka.diff_tensors()` automatically selects reverse-mode symbolic differentiation. We can see how the reverse-mode AD algorithm collects the terms in nested binary multiplications which occur multiple times in the gradient's components. The flattening of these binary multiplications is prevented by the use of the ``fix()`` function. As a consequence, when creating a compiled function for the evaluation of ``grad_diff_tensors``, the repeated subexpressions are recognised by heyoka.py (via a process of [common subexpression elimination](https://en.wikipedia.org/wiki/Common_subexpression_elimination)) and evaluated only once. Let us check:"
]
},
{
Expand Down Expand Up @@ -591,7 +591,7 @@
"id": "20c8c95d-0cd6-4cfa-afec-c2eef3c66338",
"metadata": {},
"source": [
"The total number of operations needed to evaluate the gradient has been reduced from 48 (via ``diff()``) to 18 (via :func:`~heyoka.diff_tensors()`). Indeed, we can show how the computational complexity of the evaluation of the gradient of Speelpenning's function via reverse-mode symbolic AD scales linearly $\\operatorname{O}\\left(n\\right)$ with the number of variables:"
"The total number of operations needed to evaluate the gradient has been reduced from 48 (via ``diff()``) to 18 (via {func}`~heyoka.diff_tensors()`). Indeed, we can show how the computational complexity of the evaluation of the gradient of Speelpenning's function via reverse-mode symbolic AD scales linearly $\\operatorname{O}\\left(n\\right)$ with the number of variables:"
]
},
{
Expand Down Expand Up @@ -648,11 +648,11 @@
"\n",
"## General guidelines\n",
"\n",
"In conclusion, how should one choose whether to use ``diff()`` or :func:`~heyoka.diff_tensors()` to compute symbolic derivatives in heyoka.py?\n",
"In conclusion, how should one choose whether to use ``diff()`` or {func}`~heyoka.diff_tensors()` to compute symbolic derivatives in heyoka.py?\n",
"\n",
"In general, ``diff()`` should be used when the goal is to produce human-readable symbolic expressions, and when one wants to take advantage of heyoka.py's symbolic simplification capabilities.\n",
"\n",
"If, however, the goal is to optimise the performance of the numerical evaluation of derivatives, then one should consider using :func:`~heyoka.diff_tensors()` instead."
"If, however, the goal is to optimise the performance of the numerical evaluation of derivatives, then one should consider using {func}`~heyoka.diff_tensors()` instead."
]
}
],
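
For reference, here is a minimal, self-contained sketch of the ``diff_tensors()``/``dtens`` workflow that the updated notebook describes. The two-component function below is made up purely for illustration, and the call and indexing forms follow the conventions quoted in the diff above; treat the exact signatures as assumptions rather than authoritative API documentation.

```python
import heyoka as hy

# Symbolic variables.
x, y, z = hy.make_vars("x", "y", "z")

# A 2-component vector function of three variables (invented here
# purely for illustration).
func = [x * hy.sin(y) + z, x * y * z]

# Full tensors of derivatives up to total order 1.
dt = hy.diff_tensors(func, diff_args=[x, y, z], diff_order=1)

# 8 derivatives in total: 2 order-0 outputs plus 2 x 3 order-1 entries.
print(len(dt))

# Dense indexing: [component, x order, y order, z order].
# [1, 0, 1, 0] is the derivative of the second component wrt y.
print(dt[1, 0, 1, 0])

# The order-1 block as an array of expressions.
print(dt.jacobian)
```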
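
In the same hedged spirit, a sketch of the Speelpenning comparison discussed in the diff, assuming ``n = 8`` as in the notebook; ``reduce(mul, ...)`` merely stands in for however the notebook builds the product:

```python
from functools import reduce
from operator import mul

import heyoka as hy

# Speelpenning's function with n = 8, as in the notebook.
n = 8
xs = hy.make_vars(*[f"x_{i}" for i in range(1, n + 1)])
f = reduce(mul, xs)

# Forward mode via diff(): each gradient component is an (n-1)-term
# product, so evaluating the whole gradient costs O(n**2) operations.
grad_fwd = [hy.diff(f, v) for v in xs]

# diff_tensors() picks reverse mode for a scalar function of many
# variables; shared subproducts are kept behind fix(), so a compiled
# function evaluates them once and the cost scales as O(n).
dt = hy.diff_tensors([f], diff_args=list(xs), diff_order=1)
grad_rev = dt.gradient
```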
13 changes: 4 additions & 9 deletions _sources/notebooks/mercury_precession.ipynb
@@ -75,7 +75,7 @@
"id": "e38dd5e8-117a-4b4b-adfa-71199252a148",
"metadata": {},
"source": [
"We can now proceed to the creation of the integrator object:"
"We can now proceed to the creation of the integrator object. We will be using the {func}`~heyoka.hamiltonian()` function to automatically generate the equations of motion from the Hamiltonian:"
]
},
{
@@ -87,14 +87,9 @@
"source": [
"ta = hy.taylor_adaptive(\n",
" # Hamilton's equations.\n",
" [(vx, -hy.diff(Ham, x)),\n",
" (vy, -hy.diff(Ham, y)),\n",
" (vz, -hy.diff(Ham, z)),\n",
" (x, hy.diff(Ham, vx)),\n",
" (y, hy.diff(Ham, vy)),\n",
" (z, hy.diff(Ham, vz))],\n",
" hy.hamiltonian(Ham, [x, y, z], [vx, vy, vz]),\n",
" # Initial conditions.\n",
" v0 + r0\n",
" r0 + v0\n",
")"
]
},
@@ -133,7 +128,7 @@
"metadata": {},
"outputs": [],
"source": [
"kep_out = np.array([pk.ic2par(_[3:], _[0:3], mu = mu) for _ in out])"
"kep_out = np.array([pk.ic2par(_[0:3], _[3:], mu = mu) for _ in out])"
]
},
{
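
The change above replaces hand-written Hamilton's equations with the ``hy.hamiltonian()`` helper. A minimal sketch of the new pattern, with a toy Keplerian Hamiltonian standing in for the notebook's full ``Ham`` (which also carries the relativistic correction term), and invented initial conditions:

```python
import heyoka as hy

x, y, z, vx, vy, vz = hy.make_vars("x", "y", "z", "vx", "vy", "vz")

# Toy Keplerian Hamiltonian with mu = 1 (an assumption made here for
# illustration only).
Ham = 0.5 * (vx**2 + vy**2 + vz**2) - 1.0 / hy.sqrt(x**2 + y**2 + z**2)

# hy.hamiltonian(H, [q], [p]) generates dq/dt = dH/dp, dp/dt = -dH/dq.
# The state is ordered positions first, then momenta, which is why the
# initial conditions in the diff changed from v0 + r0 to r0 + v0.
r0 = [1.0, 0.0, 0.0]
v0 = [0.0, 1.0, 0.0]

ta = hy.taylor_adaptive(
    hy.hamiltonian(Ham, [x, y, z], [vx, vy, vz]),
    r0 + v0,
)
```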
2 changes: 1 addition & 1 deletion notebooks/Batch mode overview.html
@@ -1384,7 +1384,7 @@ <h2>Ensemble propagations
</div>
</div>
<div class="cell_output docutils container">
<img alt="../_images/9a871a4075bc2d49c83d43b924c85eb6f694001a9e1b7415cf136e5a969c632a.png" src="../_images/9a871a4075bc2d49c83d43b924c85eb6f694001a9e1b7415cf136e5a969c632a.png" />
<img alt="../_images/0b004f1e2984b9bb68b7d8f68ec5e19d58a7d3e863b7a83f2952faa8b77ef734.png" src="../_images/0b004f1e2984b9bb68b7d8f68ec5e19d58a7d3e863b7a83f2952faa8b77ef734.png" />
</div>
</div>
</section>
8 changes: 4 additions & 4 deletions notebooks/Customising the adaptive integrator.html
@@ -636,8 +636,8 @@ <h2>Compact mode
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>CPU times: user 5.01 s, sys: 54.6 ms, total: 5.06 s
Wall time: 5.06 s
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>CPU times: user 5.68 s, sys: 4.49 ms, total: 5.69 s
Wall time: 5.69 s
</pre></div>
</div>
</div>
@@ -652,8 +652,8 @@ <h2>Compact mode
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>CPU times: user 179 ms, sys: 0 ns, total: 179 ms
Wall time: 179 ms
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>CPU times: user 203 ms, sys: 0 ns, total: 203 ms
Wall time: 203 ms
</pre></div>
</div>
</div>
4 changes: 2 additions & 2 deletions notebooks/NeuralHamiltonianODEs.html
@@ -636,10 +636,10 @@ <h1>Neural Hamiltonian ODEs
</div>
</div>
<div class="cell_output docutils container">
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>[&lt;matplotlib.lines.Line2D at 0x7fb6b9b9b880&gt;]
<div class="output text_plain highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>[&lt;matplotlib.lines.Line2D at 0x7f79086878e0&gt;]
</pre></div>
</div>
<img alt="../_images/15b44644ee13c3e927159689f870ffca62b3488f9936c21f3b4f69a278322605.png" src="../_images/15b44644ee13c3e927159689f870ffca62b3488f9936c21f3b4f69a278322605.png" />
<img alt="../_images/22b6f7cd6fc3dcc789b9dfbdb45d38e24b6a91ff9bf3d55466950276e8d87d40.png" src="../_images/22b6f7cd6fc3dcc789b9dfbdb45d38e24b6a91ff9bf3d55466950276e8d87d40.png" />
</div>
</div>
<p>Clearly, the power and interest of this technique, applied to Hamiltonian systems, lies in the possibility of defining a good training scheme for the FFNN weights and biases, so that the final system converges to something useful.</p>
10 changes: 5 additions & 5 deletions notebooks/NeuralODEs.html
@@ -694,7 +694,7 @@ <h3>Taylor Integrator
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>--- 0.45627927780151367 seconds --- to build (jit) the Taylor integrator
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>--- 0.4605686664581299 seconds --- to build (jit) the Taylor integrator
</pre></div>
</div>
</div>
@@ -735,7 +735,7 @@ <h3>Taylor Integrator
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>--- 0.000217437744140625 seconds --- to propagate using the Taylor scheme
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>--- 0.0004184246063232422 seconds --- to propagate using the Taylor scheme
</pre></div>
</div>
</div>
@@ -754,7 +754,7 @@ <h3>Taylor Integrator
</div>
</div>
<div class="cell_output docutils container">
<img alt="../_images/27d76ff03dc2f950cd181a0ab708eb3fecdcafd6ec646f834c82bddd89391005.png" src="../_images/27d76ff03dc2f950cd181a0ab708eb3fecdcafd6ec646f834c82bddd89391005.png" />
<img alt="../_images/33eb17925f8e880c7eb1f154f3d9fd0cf74f6914a5cde8abcc40ec47a03475da.png" src="../_images/33eb17925f8e880c7eb1f154f3d9fd0cf74f6914a5cde8abcc40ec47a03475da.png" />
</div>
</div>
</section>
@@ -793,7 +793,7 @@ <h3>Scipy Counterpart
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>--- 0.0018155574798583984 seconds --- to propagate
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>--- 0.0013859272003173828 seconds --- to propagate
</pre></div>
</div>
</div>
@@ -808,7 +808,7 @@ <h3>Scipy Counterpart
</div>
</div>
<div class="cell_output docutils container">
<img alt="../_images/e735c5d874309622476e3ef04dc2e7fb084e2423db7e9e02c9502ca23915c395.png" src="../_images/e735c5d874309622476e3ef04dc2e7fb084e2423db7e9e02c9502ca23915c395.png" />
<img alt="../_images/a1c3088f5ba7bfaf5f716eeecea4d130b360aba3706aaa23cd7ac1974d99d514.png" src="../_images/a1c3088f5ba7bfaf5f716eeecea4d130b360aba3706aaa23cd7ac1974d99d514.png" />
</div>
</div>
<p>We see a net advantage in timings when using the Taylor integration scheme. Note that we are not using batch propagation here, which would add a further 2-4x speedup in performance.</p>