Once built, you can execute the assignment from inside the `build/` directory
by running on a given mesh with a given scalar field (in
dmat format):

    ./smoothing [path to mesh.obj] [path to data.dmat]

or, to load a mesh with phony noisy data, use:

    ./smoothing [path to mesh.obj] n

or, to load a mesh with smooth z-values as data (for mesh smoothing only):

    ./smoothing [path to mesh.obj]
## Background
In this assignment we will explore how to smooth a data signal defined over a
curved surface. The data signal may be a scalar field defined on a static
surface: for example, noisy temperatures on the surface of an airplane.
Smoothing in this context can be understood as data denoising.
The signal could also be the geometric positions of the surface itself. In
this context, smoothing also affects the underlying geometry of the
domain. We can understand this operation as surface fairing.
### Flow-based formulation
In both cases, the underlying mathematics of both operations will be very
similar. If we think of the signal as undergoing a flow toward a smooth
solution over some phony notion of "time", then the governing partial
differential equation we will start with sets the change in signal value $u$
over time proportional
to the
Laplacian of the signal $\Delta u$
(for now, roughly the second derivative of the signal as we move on the
surface):
$$
\frac{\partial u}{\partial t} = {\lambda} \Delta u,
$$
where the scalar parameter ${\lambda}$ controls the rate of smoothing.
When the signal is the surface geometry, we call this a geometric
flow.
There are various ways to motivate this choice of flow for
data-/geometry-smoothing. Let us consider one way that will introduce the
Laplacian as a form of local averaging.
Given a noisy signal $f$, intuitively we can smooth $f$ by averaging every
value with its neighbors' values. In continuous math, we might write that the
smoothed value $u(\mathbf{x})$ at any point $\mathbf{x} \in \mathbf{S}$ on
our surface, times the volume of a small neighboring ball $B(\mathbf{x})$,
should be equal to the integral of the noisy values over that ball:

$$
u(\mathbf{x}) \, |B(\mathbf{x})| = \int_{B(\mathbf{x})} f(\mathbf{z}) \, dA,
$$

where $|B(\mathbf{x})|$ denotes the volume of the ball.
If the ball $B(\mathbf{x})$ is small, then we will have to repeat this averaging many
times to see a global smoothing effect. Hence, we can write that the current
value $u^t$ flows toward a smooth solution by small steps ${\delta}t$ in time:

$$
u^{t+{\delta}t}(\mathbf{x}) \, |B(\mathbf{x})| = \int_{B(\mathbf{x})} u^t(\mathbf{z}) \, dA.
$$
Dividing both sides by $|B(\mathbf{x})|$, subtracting the current value $u^t(\mathbf{x})$ from both sides, and introducing a
flow-speed parameter ${\lambda}$, we have a flow equation
describing the change in value as an integral of relative values:

$$
\frac{\partial u}{\partial t}(\mathbf{x}) = {\lambda} \frac{1}{|B(\mathbf{x})|} \int_{B(\mathbf{x})} \big( u(\mathbf{z}) - u(\mathbf{x}) \big) \, dA.
$$
For harmonic functions, $\Delta u = 0$, this integral vanishes by the
mean value theorem. It follows that for a non-harmonic function, $\Delta u \ne 0$,
this integral is (in the limit of a shrinking ball) proportional to the Laplacian
of $u$, so we have arrived at our flow equation:

$$
\frac{\partial u}{\partial t} = {\lambda} \Delta u.
$$
Alternatively, we can think of a single smoothing operation as the solution to
an energy minimization problem. If $f$ is our noisy signal over the surface,
then we want to find a signal $u$ that simultaneously minimizes its
difference with $f$ and its variation over the surface:

$$
u^* = \mathop{\text{argmin}}_u E(u) = \mathop{\text{argmin}}_u \int_\mathbf{S} \left( \frac{1}{2}(u-f)^2 + \frac{{\lambda}}{2} \|{\nabla}u\|^2 \right) dA,
$$
where again the scalar parameter ${\lambda}$ controls the rate of smoothing. This
energy-based formulation is equivalent to the flow-based formulation.
Minimizing these energies is identical to stepping forward one temporal unit in
the flow.
### Calculus of variations
In the smooth setting, our energy $E$ is a function that measures the scalar value
of a given function $u$, making it a
functional. To
understand how to minimize a functional with respect to an unknown function,
we will need concepts from the calculus of
variations.
We are used to minimizing quadratic functions with respect to a
discrete set of variables, where the minimum is obtained when the gradient of
the energy with respect to the variables is zero.
In our case, the functional $E(u)$ is quadratic in $u$ (recall that the
gradient operator${\nabla}$ is a linear
operator). The function $u$ that minimizes $E(u)$ will be obtained when any
small change or variation in $u$ produces no change in the energy value. To
create a small change in a function $u$ we will add another function $v$ times
an infinitesimal scalar ${\epsilon}$. If $E(u)$ is minimized by a function $w$ and we
are given another arbitrary function $v$, then let us define a new function

$$
{\phi}({\epsilon}) = E(w + {\epsilon}v),
$$

where we observe that ${\phi}$ is quadratic in ${\epsilon}$.
Since $E(w)$ is minimal, ${\phi}$ is minimized when ${\epsilon}$ is zero, and if ${\phi}$ is
minimal at ${\epsilon}=0$, then the derivative of ${\phi}$ with respect to ${\epsilon}$ must be zero there:

$$
0 = \left. \frac{d{\phi}}{d{\epsilon}} \right|_{{\epsilon}=0} = \int_\mathbf{S} \left( v(w-f) + {\lambda}{\nabla}v \cdot {\nabla}w \right) dA.
$$
The choice of "test" function $v$ was arbitrary, so this must hold for any
(reasonable) choice of $v$:
$$
0 = \int _\mathbf{S} (v(w-f) + {\lambda}{\nabla}v\cdot {\nabla}w) \, dA \quad \forall v.
$$
It is difficult to claim much about $w$ from this equation directly because
derivatives of $v$ are still involved. We can move a derivative from $v$ to
$w$ by applying Green's first
identity:
$$
0 = \int _\mathbf{S} (v(w-f) - {\lambda}v\Delta w )\, dA \quad (+ \text{boundary term} )\quad \forall v,
$$
where we choose to ignore the boundary terms (for now) or equivalently we
agree to work on closed surfaces $\mathbf{S}$.
Since this equality must hold for any $v$, let us consider functions that are
little "blips" centered at an arbitrary point $\mathbf{x} \in \mathbf{S}$: a
function $v$ that is one at $\mathbf{x}$ and quickly
decays to zero everywhere else. To satisfy the equation above at $\mathbf{x}$ with this
blip $v$ we must have that:

$$
w(\mathbf{x}) - f(\mathbf{x}) = {\lambda}\Delta w(\mathbf{x}).
$$
The choice of $\mathbf{x}$ was arbitrary so this must hold everywhere.
Because we invoke variations to arrive at this equation, we call the
energy-based formulation a variational formulation.
### Implicit smoothing iteration

Now that we understand that the flow-based formulation and the variational
formulation lead to the same system, let us concretely write out the implicit
smoothing step. Letting $u^0 = f$, we compute a new smoothed function $u^{t+1}$ given the
current solution $u^t$ by solving the linear system of equations:

$$
u^{t+1} - {\lambda}\Delta u^{t+1} = u^t.
$$
- "Computing Discrete Minimal Surfaces and Their Conjugates" [Pinkall & Polthier 1993]

All of these techniques will produce the same sparse Laplacian matrix $\mathbf{L} \in \mathbb{R}^{n\times n}$
for a mesh with $n$ vertices.
### Finite element derivation of the discrete Laplacian
We want to approximate the Laplacian of a function $\Delta u$. Let us consider $u$ to
be piecewise-linear
represented by scalar values at each vertex, collected in $\mathbf{u} \in \mathbb{R}^n$.
Any piecewise-linear function can be expressed as a sum of values at mesh
vertices times corresponding piecewise-linear basis functions (a.k.a. hat
functions, ${\varphi}_i$):

$$
u(\mathbf{x}) = \sum_{i=1}^n u_i \, {\varphi}_i(\mathbf{x}).
$$
By defining ${\varphi}_i$ as piecewise-linear hat functions, the values in the system
matrix $L_{ij}$ are uniquely determined by the geometry of the underlying mesh.
These values are famously known as cotangent weights. "Cotangent"
because, as we will shortly see, of their trigonometric formulae and "weights"
because as a matrix $\mathbf{L}$ they define a weighted graph
Laplacian for the given mesh.
Graph Laplacians are employed often in geometry processing, and often in
discrete contexts ostensibly disconnected from FEM. The choice or manipulation
of Laplacian weights and subsequent use as a discrete Laplace operator has been
a point of controversy in geometry processing research (see "Discrete laplace
operators: no free lunch" [Wardetzky et al. 2007]).
We first notice that ${\nabla}{\varphi}_i$ are constant on each triangle, and only nonzero on
triangles incident on node $i$. For such a triangle, $T_{\alpha}$, this ${\nabla}{\varphi}_i$ points
perpendicularly from the opposite edge $e_i$ with inverse magnitude equal to
the height $h$ of the triangle treating that opposite edge as base:

$$
{\nabla}{\varphi}_i = \frac{(\mathbf{e}_i)^{\perp}}{2A}, \qquad \|{\nabla}{\varphi}_i\| = \frac{1}{h} = \frac{\|\mathbf{e}_i\|}{2A},
$$

where $(\mathbf{e}_i)^{\perp}$ is the edge $e_i$ as a vector, rotated $90^\circ$ to point into the triangle, and $A$ is the area of the triangle.
Now, consider two neighboring nodes $i$ and $j$, connected by some edge
$\mathbf{e}_{ij}$. Then ${\nabla}{\varphi}_i$ points toward node $i$ perpendicular to $\mathbf{e}_i$ and
likewise ${\nabla} {\varphi}_j$ points toward node $j$ perpendicular to $\mathbf{e}_j$. Call the angle
formed between these two vectors ${\theta}$. So we may write:

$$
{\nabla}{\varphi}_i \cdot {\nabla}{\varphi}_j = \|{\nabla}{\varphi}_i\| \, \|{\nabla}{\varphi}_j\| \cos {\theta} = \frac{\cos {\theta}}{h_i h_j} = \frac{\|\mathbf{e}_i\| \, \|\mathbf{e}_j\| \cos {\theta}}{4A^2},
$$

where $h_i$ is the height of the triangle treating $\mathbf{e}_i$ as base, and
likewise for $h_j$. Rewriting the equation above, replacing one of the edge norms,
e.g., $\|\mathbf{e}_i\| = \frac{2A}{\|\mathbf{e}_j\| \sin {\theta}}$:

$$
{\nabla}{\varphi}_i \cdot {\nabla}{\varphi}_j = \frac{2A}{\|\mathbf{e}_j\| \sin {\theta}} \cdot \frac{\|\mathbf{e}_j\| \cos {\theta}}{4A^2} = \frac{\cot {\theta}}{2A}.
$$
Recall that ${\varphi}_i$ and ${\varphi}_j$ are only both nonzero inside these two
triangles, $T_{\alpha}$ and $T_{\beta}$. So, since these constants are inside an
integral over area we may write:

$$
\int_\mathbf{S} {\nabla}{\varphi}_i \cdot {\nabla}{\varphi}_j \, dA = \frac{\cot {\theta}_{\alpha}}{2A_{\alpha}} A_{\alpha} + \frac{\cot {\theta}_{\beta}}{2A_{\beta}} A_{\beta} = \frac{1}{2}\left( \cot {\theta}_{\alpha} + \cot {\theta}_{\beta} \right),
$$

where ${\theta}_{\alpha}$ and ${\theta}_{\beta}$ are the angles opposite the edge $\mathbf{e}_{ij}$ in $T_{\alpha}$ and $T_{\beta}$ respectively.
Treated as an operator (i.e., when used multiplied against a vector $\mathbf{L}\mathbf{u}$),
the Laplacian matrix $\mathbf{L}$ computes the local integral of the Laplacian of a
function $u$. In the energy-based formulation of the smoothing problem this is
not an issue. If we used a similar FEM derivation for the data term we would
get another sparse matrix $\mathbf{M} \in \mathbb{R}^{n \times n}$:

$$
M_{ij} = \int_\mathbf{S} {\varphi}_i \, {\varphi}_j \, dA,
$$

known as the mass matrix.
If we start directly with the continuous smoothing iteration equation, then we
have a point-wise equality. To fit in our integrated Laplacian $\mathbf{L}$ we should
convert it to a point-wise quantity. From a units perspective, we need to
divide by the local area. This would result in a discrete smoothing iteration
equation:

$$
\left( \mathbf{I} - {\lambda}\mathbf{M}^{-1} \mathbf{L} \right) \mathbf{u}^{t+1} = \mathbf{u}^t,
$$
where $\mathbf{I} \in \mathbb{R}^{n\times n}$ is the identity matrix. This equation is correct but
the resulting matrix $\mathbf{A} := \mathbf{I} - {\lambda}\mathbf{M}^{-1} \mathbf{L}$ is not symmetric and thus slower
to solve against.
Instead, we could take the healthier view of requiring our smoothing iteration
equation to hold in a locally integrated sense. In this case, we introduce mass
matrices on either side:

$$
\left( \mathbf{M} + {\lambda}\mathbf{L} \right) \mathbf{u}^{t+1} = \mathbf{M} \mathbf{u}^t.
$$
Now the system matrix $\mathbf{A} := \mathbf{M} + {\lambda}\mathbf{L}$ will be symmetric and we can use
Cholesky factorization
to solve with it.
### Laplace Operator is Intrinsic
The discrete Laplacian operator and its accompanying mass matrix are
intrinsic operators in the sense that they only depend on lengths. In
practical terms, this means we do not need to know where vertices are
actually positioned in space (i.e., $\mathbf{V}$). Rather we only need to know the
relative distances between neighboring vertices (i.e., edge lengths). We do not
even need to know which dimension this mesh is living
in.
This also means that applying a transformation to a shape that does not change
any lengths on the surface (e.g., bending a sheet of paper) will have no effect
on the Laplacian.
### Data denoising

For the data denoising application, the geometry of the domain is not changing;
only the scalar function living upon it is. We can build our discrete Laplacian
$\mathbf{L}$ and mass matrix $\mathbf{M}$ and apply the above formula with a chosen ${\lambda}$
parameter.
### Geometric smoothing
For geometric smoothing, the Laplacian operator (both $\Delta $ in the continuous
setting and $\mathbf{L},\mathbf{M}$ in the discrete setting) depend on the geometry of the
surface $\mathbf{S}$. So if the signal $u$ is replaced with the positions of points on
the surface (say, $\mathbf{V} \in \mathbb{R}^{n\times 3}$ in the discrete case), then the smoothing
iteration update rule is a non-linear function if we write it as:

$$
\left( \mathbf{M}(\mathbf{V}^{t+1}) + {\lambda}\mathbf{L}(\mathbf{V}^{t+1}) \right) \mathbf{V}^{t+1} = \mathbf{M}(\mathbf{V}^{t+1}) \, \mathbf{V}^t.
$$

However, if we assume that small changes in $\mathbf{V}$ have a negligible effect on
$\mathbf{L}$ and $\mathbf{M}$, then we can discretize explicitly by computing $\mathbf{L}$ and $\mathbf{M}$ from the current vertex positions before performing the update:

$$
\left( \mathbf{M}^t + {\lambda}\mathbf{L}^t \right) \mathbf{V}^{t+1} = \mathbf{M}^t \, \mathbf{V}^t.
$$
Repeated application of geometric smoothing may cause the mesh to "disappear".
Actually the updated vertex values are being set to
NaNs due to degenerate numerics. We are
rebuilding the discrete Laplacian at every new iteration, regardless of the
"quality" of the mesh's triangles. In particular, if a triangle tends to become
skinnier and skinnier during smoothing, what will happen to the cotangents of
its angles?
In "Can Mean-Curvature Flow Be Made Non-Singular?", Kazhdan et al. derive a new
type of geometric flow that is stable (so long as the mesh at time $t=0$ is
reasonable). Their change is remarkably simple: do not update $\mathbf{L}$, only update
$\mathbf{M}$.
## Tasks

### Learn an alternative derivation of the cotangent Laplacian

The "cotangent Laplacian" is by far the most important tool in geometry
processing. It comes up everywhere. It is important to understand where it
comes from and to be able to derive it (in one way or another).
The background section above contains a FEM derivation of the discrete
"cotangent Laplacian". For this (unmarked) task, read and understand one of the
other derivations listed above.
Hint: The finite-volume method used in [Botsch et al. 2010] is perhaps
the most accessible alternative.
### White list

- `igl::doublearea`
- `igl::edge_lengths`

### Black list

- `igl::cotmatrix_entries`
- `igl::cotmatrix`
- `igl::massmatrix`
- Trig functions `sin`, `cos`, `tan`, etc. (e.g., from `#include <cmath>`). See background notes about "intrinsic"-ness.
### `src/cotmatrix.cpp`

Construct the "cotangent Laplacian" for a mesh with edge lengths `l`. Each
entry in the output sparse, symmetric matrix `L` is given by:

$$
L_{ij} = \begin{cases}
\frac{1}{2}\left( \cot {\theta}_{\alpha} + \cot {\theta}_{\beta} \right) & \text{if } ij \text{ is an edge,} \\
-\sum_{k \ne i} L_{ik} & \text{if } i = j, \\
0 & \text{otherwise,}
\end{cases}
$$

where ${\theta}_{\alpha}$ and ${\theta}_{\beta}$ are the angles opposite edge $ij$ in its (up to) two incident triangles.