version readme
Nicholaswogan committed Apr 1, 2024
1 parent a483bbf commit 59961f2
Showing 4 changed files with 90 additions and 6 deletions.
4 changes: 2 additions & 2 deletions CMakeLists.txt
```diff
@@ -1,6 +1,6 @@
-cmake_minimum_required(VERSION "3.14")
+cmake_minimum_required(VERSION "3.14" )
 
-project(FORWARDDIFF LANGUAGES Fortran)
+project(FORWARDDIFF LANGUAGES Fortran VERSION "0.1.0")
 set(CMAKE_Fortran_MODULE_DIRECTORY "${CMAKE_BINARY_DIR}/modules")
 
 add_subdirectory(src)
```
84 changes: 84 additions & 0 deletions README.md
# Forwarddiff

Forwarddiff allows for the computation of derivatives, gradients, and Jacobians of Fortran subroutines or functions using forward-mode automatic differentiation (AD). To create this package I borrowed code, syntax, and inspiration from [DNAD](https://github.com/joddlehod/dnad), [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl), and [a lecture series by Chris Rackauckas](https://book.sciml.ai/).
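Forward-mode AD propagates a derivative alongside each value through the program's arithmetic, typically via dual numbers. Below is a minimal Python sketch of the idea (illustrative only; it is not this package's Fortran implementation):

```python
import math

# A minimal dual number: carries a value and its derivative together.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

# Chain rule for two intrinsics.
def dsin(d): return Dual(math.sin(d.val), math.cos(d.val) * d.der)
def dexp(d): return Dual(math.exp(d.val), math.exp(d.val) * d.der)

# f(x) = sin(x) exp(x) x^2 + 1 at x = 2, seeding dx/dx = 1.
x = Dual(2.0, 1.0)
f = dsin(x) * dexp(x) * x * x + 1.0
print(f.val, f.der)
```

Overloading every operator and intrinsic for a dual type is exactly what the `dual` type in this package does on the Fortran side.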

## Examples

For a comprehensive set of examples see the tests in the `test` directory. In particular, `test/fypp_example.fypp` shows how to use the [fypp](https://github.com/aradi/fypp) preprocessor to write more general, differentiable code.

Below is a simple demo that computes the derivative of the scalar function $f(x) = \sin(x)\exp(x)x^2 + 1$ at $x = 2$.

```fortran
program main
  use forwarddiff, only: wp, derivative
  implicit none
  call example()
contains
  subroutine example()
    real(wp) :: x, f, dfdx
    x = 2.0_wp
    call derivative(fcn, x, f, dfdx)
    print*,'x = ',x
    print*,'f = ',f
    print*,'df/dx = ',dfdx
  end subroutine

  function fcn(x) result(f)
    use forwarddiff
    type(dual), intent(in) :: x
    type(dual) :: f
    f = sin(x)*exp(x)*x**2.0_wp + 1.0_wp
  end function
end program
```

Output:

```
x = 2.0000000000000000
f = 27.875398789713000
df/dx = 41.451068296868563
```
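These values can be cross-checked against the analytic derivative, $f'(x) = e^x(x^2\cos x + x^2\sin x + 2x\sin x)$, for example with a few lines of Python:

```python
import math

x = 2.0
f = math.sin(x) * math.exp(x) * x**2 + 1.0
# f'(x) = e^x (x^2 cos x + x^2 sin x + 2 x sin x)
dfdx = math.exp(x) * (x**2 * math.cos(x) + x**2 * math.sin(x)
                      + 2 * x * math.sin(x))
print(f, dfdx)  # matches the program output above to double precision
```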

## Building

```sh
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build .
# run test
./test/test_forwarddiff
```

<!-- ## Sparse Jacobians
This package can take advantage of two types of common sparse Jacobians: banded, and block-banded. For banded Jacobians, the output array contains
$$J =
\begin{bmatrix}
\frac{df_1}{dx_1} & \frac{df_1}{dx_2} & 0 & 0 & 0 \\
\frac{df_2}{dx_1} & \frac{df_2}{dx_2} & \frac{df_2}{dx_3} & 0 & 0 \\
0 & \frac{df_3}{dx_2} & \frac{df_3}{dx_3} & \frac{df_3}{dx_4} & 0 \\
0 & 0 & \frac{df_4}{dx_3} & \frac{df_4}{dx_4} & \frac{df_4}{dx_5} \\
0 & 0 & 0 & \frac{df_5}{dx_4} & \frac{df_5}{dx_5} \\
\end{bmatrix}
\xrightarrow{\text{sparse rep.}}
\begin{bmatrix}
0 & \frac{df_1}{dx_2} & \frac{df_2}{dx_3} & \frac{df_3}{dx_4} & \frac{df_4}{dx_5} \\
\frac{df_1}{dx_1} & \frac{df_2}{dx_2} & \frac{df_3}{dx_3} & \frac{df_4}{dx_4} & \frac{df_5}{dx_5} \\
\frac{df_2}{dx_1} & \frac{df_3}{dx_2} & \frac{df_4}{dx_3} & \frac{df_5}{dx_4} & 0 \\
\end{bmatrix}
$$ -->

## Limitations

This package has the following limitations:

- The package is not compatible with all Fortran intrinsic functions. If you identify an intrinsic that should be added, please submit a pull request.

- The `jacobian` routine can only compute square Jacobians.
4 changes: 2 additions & 2 deletions test/test_forwarddiff.f90
```diff
@@ -14,12 +14,12 @@ subroutine test_dual()
 
 open(unit=2,file='test.dat',status='replace',form='unformatted')
 
-x = 10.0_wp
+x = 3.0_wp
 call derivative(func_operators, x, f, dfdx)
 print*,f, dfdx
 write(2) f, dfdx
 
-x = 10.0_wp
+x = 2.0_wp
 call derivative(func_intrinsics1, x, f, dfdx)
 print*,f, dfdx
 write(2) f, dfdx
```
4 changes: 2 additions & 2 deletions test/test_jax.py
```diff
@@ -44,14 +44,14 @@ def func_grad2(x):
 def test():
     fil = FortranFile('test.dat','r')
 
-    x = np.array(10.0,dtype=np.float32)
+    x = np.array(3.0,dtype=np.float32)
     f = func_operators(x)
     dfdx = jax.grad(func_operators)(x)
     f1, dfdx1 = fil.read_record(np.float64)
     print(f/f1,dfdx/dfdx1)
     assert np.isclose(f,f1) and np.isclose(dfdx,dfdx1)
 
-    x = np.array(10.0,dtype=np.float32)
+    x = np.array(2.0,dtype=np.float32)
     f = func_intrinsics1(x)
     dfdx = jax.grad(func_intrinsics1)(x)
     f1, dfdx1 = fil.read_record(np.float64)
```
