
Adding supporting code for blog post #9


Open · wants to merge 4 commits into main
27 changes: 27 additions & 0 deletions phiml-blog-supporting-code/FNO/fourierLayer.m
@@ -0,0 +1,27 @@
function layer = fourierLayer(numModes,tWidth,args)
% fourierLayer   Fourier layer for a 1-D Fourier Neural Operator
%
%   The layer sums a spectral convolution branch (global mixing in Fourier
%   space) with a pointwise 1-by-1 convolution branch (local mixing in
%   physical space).

arguments
    numModes
    tWidth
    args.Name = ""
end
name = args.Name;

net = dlnetwork;

% Spectral convolution branch: in -> specConv -> add.
layers = [
    identityLayer(Name="in")
    spectralConvolution1dLayer(numModes,tWidth,Name="specConv")
    additionLayer(2,Name="add")];

net = addLayers(net,layers);

% Pointwise (1-by-1) convolution branch: in -> fc -> add.
layer = convolution1dLayer(1,tWidth,Name="fc");
net = addLayers(net,layer);

net = connectLayers(net,"in","fc");
net = connectLayers(net,"fc","add/in2");

% Wrap the two-branch network as a single composable layer.
layer = networkLayer(net,Name=name);

end
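
For context, here is a minimal usage sketch (not part of this PR) showing how fourierLayer could be stacked into a small 1-D FNO. The width, mode count, layer arrangement, and sample input size below are illustrative assumptions, not values taken from the blog post.

```matlab
% Illustrative only: stack fourierLayer blocks between a lifting and a
% projection convolution. All sizes here are assumptions.
numModes = 16;
tWidth   = 64;

layers = [
    convolution1dLayer(1,tWidth)        % lift input channels to tWidth
    fourierLayer(numModes,tWidth)
    geluLayer
    fourierLayer(numModes,tWidth)
    geluLayer
    convolution1dLayer(1,128)           % projection head
    geluLayer
    convolution1dLayer(1,1)];           % single output channel

net = dlnetwork(layers,Initialize=false);

% Initialize and run on a sample input: 1024 grid points, 2 input channels
% (e.g. forcing value and grid coordinate), batch of 2 samples.
x   = dlarray(rand(1024,2,2,"single"),"SCB");
net = initialize(net,x);
y   = predict(net,x);   % 1024(S) x 1(C) x 2(B)
```
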
97 changes: 97 additions & 0 deletions phiml-blog-supporting-code/FNO/spectralConvolution1dLayer.m
@@ -0,0 +1,97 @@
Collaborator @bwdGitHub commented on Jun 24, 2025:

I expect we could use the same version here and in the 1d FNO example - maybe we should consider a directory of shared tools for the examples.

Reply: That's something we'll have to look into in the future, I think; not something to try to deal with in this PR.

classdef spectralConvolution1dLayer < nnet.layer.Layer ...
        & nnet.layer.Formattable ...
        & nnet.layer.Acceleratable
    % spectralConvolution1dLayer   1-D Spectral Convolution Layer

    properties
        NumChannels
        OutputSize
        NumModes
    end

    properties (Learnable)
        Weights
    end

    methods
        function this = spectralConvolution1dLayer(numModes,outChannels,args)
            % spectralConvolution1dLayer   1-D Spectral Convolution Layer
            %
            %   layer = spectralConvolution1dLayer(numModes, outChannels)
            %   creates a spectral convolution 1d layer. numModes specifies
            %   the number of modes which are combined in Fourier space.
            %   outChannels specifies the number of channels in the layer
            %   output.
            %
            %   layer = spectralConvolution1dLayer(numModes, outChannels,
            %   Name=Value) specifies additional options using one or more
            %   name-value arguments:
            %
            %   Name    - Name for the layer. The default value is
            %             "spectralConv1d".
            %
            %   Weights - Complex learnable array of size
            %             (inChannels)x(outChannels)x(numModes). The
            %             default value is [].

            arguments
                numModes (1,1) double
                outChannels (1,1) double
                args.Name {mustBeTextScalar} = "spectralConv1d"
                args.Weights = []
            end

            this.OutputSize = outChannels;
            this.NumModes = numModes;
            this.Name = args.Name;
            this.Weights = args.Weights;
        end

        function this = initialize(this, ndl)
            inChannels = ndl.Size( finddim(ndl,'C') );
            outChannels = this.OutputSize;
            numModes = this.NumModes;

            if isempty(this.Weights)
                % Scaled complex uniform initialization of the spectral weights.
                this.NumChannels = inChannels;
                this.Weights = 1./(inChannels*outChannels).*( ...
                    rand([inChannels outChannels numModes]) + ...
                    1i.*rand([inChannels outChannels numModes]) );
            else
                assert( inChannels == this.NumChannels, ...
                    'The input channel size must match the layer.' );
            end
        end

        function y = predict(this, x)
            % First compute the rfft, normalized and one-sided
            x = real(x);
            x = stripdims(x);
            N = size(x, 1);
            xft = iRFFT(x, 1, N);

            % Multiply selected Fourier modes
            xft = permute(xft(1:this.NumModes, :, :), [3 2 1]);
            yft = pagemtimes( xft, this.Weights );
            yft = permute(yft, [3 2 1]);

            % Pad the truncated modes with zeros up to the one-sided length.
            S = floor(N/2)+1 - this.NumModes;
            yft = cat(1, yft, zeros([S size(yft, 2:3)], 'like', yft));

            % Return to physical space via irfft, normalized and one-sided
            y = iIRFFT(yft, 1, N);

            % Re-apply labels
            y = dlarray(y, 'SCB');
            y = real(y);
        end
    end
end

function y = iRFFT(x, dim, N)
% One-sided, normalized FFT along dimension dim.
y = fft(x, [], dim);
y = y(1:floor(N/2)+1, :, :)./N;
end

function y = iIRFFT(x, dim, N)
% Inverse of iRFFT: rebuild the conjugate-symmetric spectrum, then invert.
x(end+1:N, :, :, :) = conj( x(ceil(N/2):-1:2, :, :, :) );
y = ifft(N.*x, [], dim, 'symmetric');
end
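
As a sanity check on the "normalized and one-sided" convention used by the local iRFFT/iIRFFT helpers, the following standalone snippet (illustrative, not part of the diff) reproduces the round trip for a real signal with an even number of samples; the helpers themselves are local to the class file, so the operations are repeated inline.

```matlab
% Reproduce the iRFFT/iIRFFT round trip for a real 64-sample signal.
N  = 64;
x  = rand(N,3,2);                                   % real signal, 3 channels, batch 2

xf = fft(x,[],1);
xf = xf(1:floor(N/2)+1,:,:)./N;                     % one-sided, normalized (as iRFFT)

xr = xf;
xr(end+1:N,:,:) = conj( xr(ceil(N/2):-1:2,:,:) );   % rebuild conjugate-symmetric half
xrec = ifft(N.*xr,[],1,'symmetric');                % back to physical space (as iIRFFT)

max(abs(x - xrec),[],"all")                         % on the order of 1e-15
```
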
47 changes: 47 additions & 0 deletions phiml-blog-supporting-code/FNO/trainingPartitions.m
@@ -0,0 +1,47 @@
function varargout = trainingPartitions(numObservations,splits)
%TRAININGPARTITIONS Random indices for splitting training data
%   [idx1,...,idxN] = trainingPartitions(numObservations,splits) returns
%   random vectors of indices to help split a data set with the specified
%   number of observations, where SPLITS is a vector of length N of
%   partition sizes that sum to one.
%
%   % Example: Get indices for 50%-50% training-test split of 500
%   % observations.
%   [idxTrain,idxTest] = trainingPartitions(500,[0.5 0.5])
%
%   % Example: Get indices for 80%-10%-10% training, validation, test split
%   % of 500 observations.
%   [idxTrain,idxValidation,idxTest] = trainingPartitions(500,[0.8 0.1 0.1])

arguments
    numObservations (1,1) {mustBePositive}
    splits {mustBeVector,mustBeInRange(splits,0,1,"exclusive"),mustSumToOne}
end

numPartitions = numel(splits);
varargout = cell(1,numPartitions);

idx = randperm(numObservations);

idxEnd = 0;

for i = 1:numPartitions-1
    idxStart = idxEnd + 1;
    idxEnd = idxStart + floor(splits(i)*numObservations) - 1;

    varargout{i} = idx(idxStart:idxEnd);
end

% Last partition absorbs any remaining observations.
varargout{end} = idx(idxEnd+1:end);

end

function mustSumToOne(v)
% Validate that value sums to one.

if sum(v,"all") ~= 1
    error("Value must sum to one.")
end

end
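
A brief, hypothetical usage sketch beyond the calls shown in the help text (the variable names X and Y and the observation count are assumptions, not taken from this PR): the returned index vectors are used to slice data arrays whose first dimension is the observation dimension.

```matlab
% Hypothetical example: split 2048 observations 80/10/10.
[idxTrain,idxValidation,idxTest] = trainingPartitions(2048,[0.8 0.1 0.1]);

XTrain = X(idxTrain,:);        YTrain = Y(idxTrain,:);
XVal   = X(idxValidation,:);   YVal   = Y(idxValidation,:);
XTest  = X(idxTest,:);         YTest  = Y(idxTest,:);
```
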
9 changes: 9 additions & 0 deletions phiml-blog-supporting-code/PINN/solveNonlinearPendulum.m
@@ -0,0 +1,9 @@
function sol = solveNonlinearPendulum(tspan,omega0,thetaDot0)
% Solve theta'' = -omega0^2*sin(theta) with theta(0) = 0 and
% theta'(0) = thetaDot0, using the state vector x = [theta; thetaDot].

F = ode;
F.ODEFcn = @(t,x) [x(2); -omega0^2.*sin(x(1))];
F.InitialValue = [0; thetaDot0];
F.Solver = "ode45";
sol = solve(F,tspan);

end
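
A brief usage sketch (the values below are illustrative assumptions, not taken from the blog post): solve for 10 seconds with omega0 = 2*pi and an initial angular velocity of 1.5 rad/s, then plot the angle.

```matlab
% Illustrative call; solve returns an ODEResults object whose Solution rows
% are [theta; thetaDot] evaluated at the requested times.
tspan = linspace(0,10,500);
sol   = solveNonlinearPendulum(tspan,2*pi,1.5);

theta = sol.Solution(1,:);
plot(sol.Time,theta)
xlabel("t"), ylabel("\theta(t)")
```
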
32 changes: 32 additions & 0 deletions phiml-blog-supporting-code/README.md
@@ -0,0 +1,32 @@
# Physics-Informed Machine Learning Methods and Implementation: Supporting Code
This collection provides examples demonstrating a variety of Physics-Informed Machine Learning (PhiML) techniques, applied to a simple pendulum system. The examples provided here are discussed in detail in the accompanying blog post "Physics-Informed Machine Learning: Methods and Implementation".

![Animation of a swinging pendulum](https://www.mathworks.com/help/examples/symbolic/win64/SimulateThePhysicsOfAPendulumsPeriodicSwingExample_10.gif)

The techniques described for the pendulum are categorized by their primary objectives:
- **Modeling complex systems from data**
1. **Neural Ordinary Differential Equation**: learns underlying dynamics $f$ in the differential equation $\frac{d \mathbf{x}}{d t} = f(\mathbf{x},\mathbf{u})$ using a neural network.
2. **Neural State-Space**: learns both the system dynamics $f$ and the measurement function $g$ in the state-space model $\frac{d \mathbf{x}}{dt} = f(\mathbf{x},\mathbf{u}), \mathbf{y}=g(\mathbf{x},\mathbf{u})$ using neural networks. Not shown in this repository, but illustrated in the documentation example [Neural State-Space Model of Simple Pendulum System](https://www.mathworks.com/help/ident/ug/training-a-neural-state-space-model-for-a-simple-pendulum-system.html).
3. **Universal Differential Equation**: combines known dynamics $g$ with unknown dynamics $h$, for example $\frac{d\mathbf{x}}{dt} = f(\mathbf{x},\mathbf{u}) = g(\mathbf{x},\mathbf{u}) + h(\mathbf{x},\mathbf{u})$, learning the unknown part $h$ using a neural network.
4. **Hamiltonian Neural Network**: learns the system's Hamiltonian $\mathcal{H}$, and accounts for energy conservation by enforcing Hamilton's equations $\frac{dq}{dt} = \frac{\partial \mathcal{H}}{\partial p}, \frac{dp}{dt} = -\frac{\partial \mathcal{H}}{\partial q}$.
- **Discovering governing equations from data:**
1. **SINDy (Sparse Identification of Nonlinear Dynamics)**: learns a mathematical representation of the system dynamics $f$ by performing sparse regression on a library of candidate functions (a minimal sketch follows this list).
- **Solving known ordinary and partial differential equations:**
1. **Physics-Informed Neural Networks**: learns the solution to a differential equation by embedding the governing equations directly into the neural network's loss function.
2. **Fourier Neural Operator**: learns the solution operator that maps input functions (e.g. forcing function) to the solution of a differential equation, utilizing Fourier transforms to parameterize the integral kernel and efficiently capture global dependencies.
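
As a concrete illustration of the sparse-regression idea behind SINDy, here is a minimal sketch under the assumption that `X` holds sampled states `[theta thetaDot]` and `dX` their estimated time derivatives; neither variable, nor the threshold value, comes from this repository.

```matlab
% Minimal SINDy-style sketch (illustrative only): sequentially thresholded
% least squares over a small candidate library.
L = [ones(size(X,1),1), X, X.^2, sin(X), cos(X)];   % candidate library Theta(X)
lambda = 0.05;                                      % sparsification threshold
Xi = L \ dX;                                        % initial least-squares fit
for k = 1:10
    small = abs(Xi) < lambda;                       % prune small coefficients
    Xi(small) = 0;
    for j = 1:size(dX,2)
        big = ~small(:,j);
        Xi(big,j) = L(:,big) \ dX(:,j);             % refit the surviving terms
    end
end
% Nonzero rows of Xi identify the active library terms in each state equation.
```
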

## Setup
Open the project file `physics-informed-ml-blog-supporting-code.prj` to correctly set the path.

## MathWorks Products ([https://www.mathworks.com](https://www.mathworks.com/))
Requires MATLAB&reg; R2024a or newer
- [Deep Learning Toolbox&trade;](https://www.mathworks.com/products/deep-learning.html)

## License
The license is available in the [license.txt](license.txt) file in this GitHub repository.

## Community Support
[MATLAB Central](https://www.mathworks.com/matlabcentral)

Copyright 2025 The MathWorks, Inc.

Binary file added phiml-blog-supporting-code/data/fno_data_1024.mat
Binary file not shown.
Binary file added phiml-blog-supporting-code/data/fno_data_128.mat
Binary file not shown.
Binary file added phiml-blog-supporting-code/data/fno_data_2048.mat
Binary file not shown.
Binary file added phiml-blog-supporting-code/data/fno_data_256.mat
Binary file not shown.
Binary file added phiml-blog-supporting-code/data/fno_data_512.mat
Binary file not shown.
Binary file added phiml-blog-supporting-code/data/fno_data_64.mat
Binary file not shown.