[WIP] Docs and demos #11

Open: wants to merge 2 commits into master
3 changes: 3 additions & 0 deletions docs/make.jl
@@ -0,0 +1,3 @@
using Documenter, ImageTracking

makedocs()
34 changes: 34 additions & 0 deletions docs/src/function_reference.md
@@ -0,0 +1,34 @@
# Optical Flow

## Main Functions

```@docs
optical_flow
```

## Other Functions

```@docs
polynomial_expansion
```

## Optical Flow Algorithms

```@docs
LK
Farneback
```

# Haar like features

## Main Functions

```@docs
haar_features
```

## Other Functions

```@docs
haar_coordinates
```
20 changes: 20 additions & 0 deletions docs/src/index.md
@@ -0,0 +1,20 @@
# ImageTracking.jl

## Introduction

[ImageTracking](https://github.com/JuliaImages/ImageTracking.jl) is a Julia package for optical flow and
object tracking algorithms as part of the JuliaImages ecosystem.

Optical flow and object tracking algorithms find applications in a variety of fields, including human-computer interaction,
security and surveillance, video communication and compression, augmented reality, traffic control, robotics, medical imaging
and video editing.

## Installation

The package can be installed with Julia's package manager:

```julia
Pkg.add("ImageTracking.jl")
```

ImageTracking.jl requires [Images.jl](https://github.com/JuliaImages/Images.jl), [ImageFiltering.jl](https://github.com/JuliaImages/ImageFiltering.jl) and [Interpolations.jl](https://github.com/JuliaMath/Interpolations.jl).
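
Once installed, the package can be loaded in the usual way. A minimal sketch of a first session is shown below; the listed names are the ones documented in the function reference.

```julia
using ImageTracking

# The APIs documented in the function reference
# (`optical_flow`, `LK`, `Farneback`, `haar_features`, `haar_coordinates`)
# should now be available in the current session.
```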
65 changes: 65 additions & 0 deletions docs/src/tutorials/farneback.md
@@ -0,0 +1,65 @@
# Farneback Dense Optical Flow

A method for dense optical flow estimation developed by Gunnar Farneback. It
computes the optical flow for all the points in the frame using the polynomial
representation of the images. The idea of polynomial expansion is to approximate
the neighbourhood of each pixel with a quadratic polynomial. Displacement
fields are then estimated from the polynomial coefficients, based on how the
polynomial transforms under translation.
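
As a brief sketch of the idea (following the Farnebäck references below): each neighbourhood is modelled by a quadratic polynomial, and if the second frame is a pure translation of the first by a displacement `d`, with the quadratic term assumed to be the same in both frames, the displacement can be read off the linear coefficients:

```math
f_1(\mathbf{x}) \approx \mathbf{x}^\top A \mathbf{x} + \mathbf{b}_1^\top \mathbf{x} + c_1, \qquad
f_2(\mathbf{x}) = f_1(\mathbf{x} - \mathbf{d}) \;\Rightarrow\; \mathbf{b}_2 = \mathbf{b}_1 - 2A\mathbf{d}, \qquad
\mathbf{d} = -\tfrac{1}{2} A^{-1}\left(\mathbf{b}_2 - \mathbf{b}_1\right)
```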

The arguments to the `Farneback` constructor are listed below; an annotated constructor sketch follows the list.

- `flow_est` = Array of `SVector{2}` containing estimated flow values for all points in the frame
- `iterations` = Number of iterations the displacement estimation algorithm is run at each point
- `window_size` = Size of the search window at each pyramid level; the total size of the window used is `2*window_size + 1`
- `σw` = Standard deviation of the Gaussian weighting filter
- `neighbourhood` = Size of the pixel neighbourhood used to find the polynomial expansion for each pixel; larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field
- `σp` = Standard deviation of the Gaussian that is used to smooth the derivatives used as a basis for the polynomial expansion (applicability)
- `est_flag` = If `true`, use `flow_est` as the initial flow estimate; if `false`, assume zero initial flow values
- `gauss_flag` = If `true` (the default), use a Gaussian filter instead of a box filter of the same size for optical flow estimation; this usually gives more accurate flow than a box filter, at the cost of lower speed

## References

Farnebäck G. (2003) Two-Frame Motion Estimation Based on Polynomial Expansion. In: Bigun J.,
Gustavsson T. (eds) Image Analysis. SCIA 2003. Lecture Notes in Computer Science, vol 2749. Springer, Berlin,
Heidelberg

Farnebäck, G.: Polynomial Expansion for Orientation and Motion Estimation. PhD thesis, Linköping University,
Sweden, SE-581 83 Linköping, Sweden (2002) Dissertation No 790, ISBN 91-7373-475-6.

## Example

In this example we will find the optical flow for all points between an image and a shifted copy of it.

First we load the image and create the shifted image by shifting it by `5` pixels in the `y` direction (vertically down) and `3` pixels
in the `x` direction (horizontally right).

```@example 1
using ImageTracking, TestImages, Images, StaticArrays

img1 = Gray{Float64}.(testimage("mandrill"))
img2 = zeros(eltype(img1), size(img1))

# Shift the image content down by 5 pixels and right by 3 pixels; the border
# with no source pixels is left black.
for i = 6:size(img1, 1)
    for j = 4:size(img1, 2)
        img2[i, j] = img1[i - 5, j - 3]
    end
end
```

Now we calculate the flow for all the points in the image using the `Farneback` algorithm.

```@example 1
fb = Farneback(rand(SVector{2,Float64},2,2), 7, 39, 6.0, 11, 1.5, false, true)
flow = optical_flow(img1, img2, fb)
```

The output contains the flow value for each of the points in the image.
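
As a quick sanity check, and assuming the returned flow is an array of `SVector{2}` displacements (as the `flow_est` argument above suggests), the flow magnitudes can be inspected; away from the border they should be close to the magnitude of the synthetic shift, `hypot(5, 3) ≈ 5.83`.

```julia
using LinearAlgebra

magnitudes = map(norm, flow)   # per-pixel displacement magnitude
extrema(magnitudes)
```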
17 changes: 17 additions & 0 deletions docs/src/tutorials/haar.md
@@ -0,0 +1,17 @@
# Haar-like features

Haar-like features are digital image features used in object recognition. They owe their name to their intuitive similarity with Haar
wavelets and were used in the first real-time face detector developed by Viola and Jones.

A simple rectangular Haar-like feature can be defined as the difference of the sums of pixels in two adjacent rectangular regions, which can be at any position
and scale within the original image. Such a feature is called a 2-rectangle feature. Viola and Jones also defined 3-rectangle features and
4-rectangle features. The values indicate certain characteristics of a particular area of the image. Each feature type can indicate the existence (or
absence) of certain characteristics in the image, such as edges or changes in texture. For example, a 2-rectangle feature can indicate where the border
lies between a dark region and a light region.
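
To make this concrete, here is a small self-contained sketch (plain Julia, not the package API) that builds an integral image with `cumsum` and evaluates one horizontal 2-rectangle feature as the difference of two box sums:

```julia
# Sum of the pixels in the box with corners (r1, c1) and (r2, c2), inclusive,
# read from the integral image in constant time.
function boxsum(ii, r1, c1, r2, c2)
    S(r, c) = (r < 1 || c < 1) ? 0.0 : ii[r, c]
    return S(r2, c2) - S(r1 - 1, c2) - S(r2, c1 - 1) + S(r1 - 1, c1 - 1)
end

img = rand(8, 8)
ii  = cumsum(cumsum(img, dims = 1), dims = 2)             # integral image

# Horizontal 2-rectangle feature over rows 2:5: left half minus right half.
feature = boxsum(ii, 2, 2, 5, 4) - boxsum(ii, 2, 5, 5, 7)
```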

The ImageTracking package houses two functions related to Haar-like features:

- `haar_features`: returns an array containing the Haar-like features for the given integral image in the region specified by the points `top_left` and `bottom_right`.
- `haar_coordinates`: returns an array containing the coordinates of all possible Haar-like features of the specified type in any region of the given `height` and `width`.
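
A usage sketch is shown below; the argument order and the feature-type symbol are assumptions based on the descriptions above, so consult the docstrings in the function reference for the authoritative signatures.

```julia
using ImageTracking, Images, TestImages

img     = Gray{Float64}.(testimage("mandrill"))
int_img = cumsum(cumsum(Float64.(img), dims = 1), dims = 2)   # integral image built with cumulative sums

# Hypothetical calls; the exact signatures may differ from the current API.
coords   = haar_coordinates(24, 24, :x2)                  # coordinates of all features of one type in a 24×24 region
features = haar_features(int_img, (1, 1), (24, 24), :x2)  # feature values for the top-left 24×24 region
```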
65 changes: 65 additions & 0 deletions docs/src/tutorials/lucas_kanade.md
@@ -0,0 +1,65 @@
# Lucas-Kanade Sparse Optical Flow

A differential method for optical flow estimation developed by Bruce D. Lucas
and Takeo Kanade. It assumes that the flow is essentially constant in a local
neighbourhood of the pixel under consideration, and solves the basic optical flow
equations for all the pixels in that neighbourhood by the least squares criterion.
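
In compact form (a standard statement of the method, given here for orientation): for a tracked point, the brightness-constancy equations of the `n` pixels in its window are stacked and solved in the least-squares sense,

```math
A \mathbf{v} = \mathbf{b}, \qquad
A = \begin{bmatrix} I_x(q_1) & I_y(q_1) \\ \vdots & \vdots \\ I_x(q_n) & I_y(q_n) \end{bmatrix}, \quad
\mathbf{b} = -\begin{bmatrix} I_t(q_1) \\ \vdots \\ I_t(q_n) \end{bmatrix}, \qquad
\mathbf{v} = (A^\top A)^{-1} A^\top \mathbf{b}
```

The matrix `AᵀA` is the 2×2 normal matrix whose minimum eigenvalue is tested against `min_eigen_thresh` below.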

The arguments to the `LK` constructor are listed below; an annotated constructor sketch follows the list.

- `prev_points` = Vector of `SVector{2}` points for which the flow needs to be found
- `next_points` = Vector of `SVector{2}` containing initial estimates of the new positions of the input features in the next image
- `window_size` = Size of the search window at each pyramid level; the total size of the window used is `2*window_size + 1`
- `max_level` = 0-based maximal pyramid level number; if set to 0, pyramids are not used (single level), if set to 1, two levels are used, and so on
- `estimate_flag` = If `true`, use `next_points` as the initial estimate; if `false`, copy `prev_points` to `next_points` and use that as the estimate
- `term_condition` = The termination criterion of the iterative search algorithm, i.e. the number of iterations
- `min_eigen_thresh` = The algorithm calculates the minimum eigenvalue of a 2×2 normal matrix of the optical flow equations, divided by the number of pixels in the window; if this value is less than `min_eigen_thresh`, the corresponding feature is filtered out and its flow is not processed (the default value is `1e-6`)
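
For orientation, the sketch below maps these arguments onto the `LK` constructor call used later on this page; the positional argument order is an assumption based on that example (with `min_eigen_thresh` left at its default), so check the `LK` docstring for the authoritative signature.

```julia
using ImageTracking, StaticArrays

pts = [SVector{2}(100.0, 120.0), SVector{2}(200.0, 150.0)]   # points whose flow we want

lk = LK(pts,                      # prev_points
        [SVector{2}(0.0, 0.0)],   # next_points (ignored here, since estimate_flag is false)
        11,                       # window_size: the window used is 2*11 + 1 pixels
        4,                        # max_level: pyramid levels 0 through 4
        false,                    # estimate_flag: do not use next_points as an initial estimate
        20)                       # term_condition: number of iterations
```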

## References

B. D. Lucas, & Kanade. "An Interative Image Registration Technique with an Application to Stereo Vision,"
DARPA Image Understanding Workshop, pp 121-130, 1981.

J.-Y. Bouguet, “Pyramidal implementation of the affine lucas kanadefeature tracker description of the
algorithm,” Intel Corporation, vol. 5,no. 1-10, p. 4, 2001.

## Example

In this example we will find the optical flow for a few corner points (Shi-Tomasi) between an image and a shifted copy of it.

First we load the image and create the shifted image by shifting it by `5` pixels in the `y` direction (vertically down) and `3` pixels
in the `x` direction (horizontally right).

```@example 1
using ImageTracking, TestImages, Images, StaticArrays, OffsetArrays

img1 = Gray{Float64}.(testimage("mandrill"))
img2 = OffsetArray(img1, 5, 3)
```

Now we find corners in the image using the `imcorner` function from `Images.jl` since these corners have distinctive properties and are
much easier for the flow algorithm to track.

```@example 1
corners = imcorner(img1, method=shi_tomasi)
inds = findall(corners)                      # CartesianIndex of every detected corner
a = map(I -> SVector{2}(I[1], I[2]), inds)
```

We have collected all the corners in the vector `a`. Next we randomly sample `200` of these points and find the flow for them.

```@example 1
pts = rand(a, (200,))

lk = LK(pts, [SVector{2}(0.0,0.0)], 11, 4, false, 20)
flow, status, err = optical_flow(img1, img2, lk)
```

The three output Vectors contain the flow values, status and error for each of the points in `pts`.
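
Assuming `status` is a vector of Booleans marking the points that were tracked successfully, as the name suggests (check the `optical_flow` docstring to confirm), the accepted estimates can be pulled out as below; for the synthetic shift above each accepted flow vector should be close to `(5, 3)`.

```julia
valid_flow   = flow[status]   # flow vectors for the points that were tracked successfully
valid_points = pts[status]    # the corresponding input points
```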
21 changes: 21 additions & 0 deletions docs/src/tutorials/optical_flow.md
@@ -0,0 +1,21 @@
# Optical Flow

Optical flow is the pattern of apparent motion of image objects between two consecutive frames caused by the relative motion between the
object and the camera. It is a 2D vector field where each vector is a displacement vector showing the movement of points from the first frame to the
second.

By estimating optical flow between video frames, one can measure the velocities of objects in the video. In general, moving objects that
are closer to the camera will display more apparent motion than distant objects that are moving at the same speed. Optical flow
estimation is used in computer vision to characterize and quantify the motion of objects in a video stream, often for motion-based object
detection and tracking systems.

The ImageTracking package currently houses the following optical flow algorithms:

- `Lucas-Kanade`
- `Farneback`

The API for optical flow calculation is as follows:

`optical_flow(prev_image, next_image, optical_flow_algo)`

Depending on which optical flow algorithm is used, different outputs are returned by the function.
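
For example, the two tutorials in this documentation call the same entry point with different algorithm objects (`lk` and `fb` below are constructed as in the Lucas-Kanade and Farneback tutorials, respectively), and the return values differ accordingly:

```julia
# Sparse flow for selected points (Lucas-Kanade): flow vectors, a status flag
# and an error measure per tracked point.
flow, status, err = optical_flow(prev_image, next_image, lk)

# Dense flow for every pixel (Farneback).
flow = optical_flow(prev_image, next_image, fb)
```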