Intermediate frame interpolation using optical flow with FlowNet2
Explore the repository»
View Problem Statement
View Report
tags: frame interpolation, optical flow, flownet2, digital video, deep learning, pytorch
This project deals with the task of video frame interpolation using estimated optical flow. In particular, we estimate the forward optical flow (flow from Frame N to Frame N + 2) and the backward optical flow (flow from Frame N + 2 to Frame N) and use both to estimate the intermediate Frame N + 1. To estimate the optical flow, we use the pre-trained FlowNet2 deep learning model and experiment with fine-tuning it. We explore the interpolation performance on the Spheres dataset and the Corridor dataset. We observe that, for both datasets, the quality of the interpolated frames is comparable to that of the original frames. A detailed description of the interpolation algorithms, loss functions, and analysis of the results is available in the Report.
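The bidirectional scheme above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: it assumes backward warping with nearest-neighbor sampling and the common approximation that the flow from the midpoint to Frame N is minus half the forward flow (and, symmetrically, minus half the backward flow for Frame N + 2).

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp a frame by a dense flow field (H x W x 2) with
    nearest-neighbor sampling: out(x, y) = frame(x + u, y + v)."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample source pixels at (x + u, y + v), clipped to the image bounds.
    src_x = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    return frame[src_y, src_x]

def interpolate_middle(frame_n, frame_n2, fwd_flow, bwd_flow):
    """Estimate Frame N + 1 by warping each neighbor halfway toward the
    midpoint and averaging the two warped frames."""
    a = warp(frame_n, -0.5 * fwd_flow)   # flow(mid -> N)   ~ -0.5 * flow(N -> N+2)
    b = warp(frame_n2, -0.5 * bwd_flow)  # flow(mid -> N+2) ~ -0.5 * flow(N+2 -> N)
    return 0.5 * (a + b)
```

With a constant translation of 2 pixels between Frame N and Frame N + 2, this places the content at a 1-pixel offset in the estimated middle frame, as expected.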
Note: The flownet folder contains code modified from the NVIDIA FlowNet2 Repository and the FlowNet2 PyTorch Wrapper. Download the pre-trained models and put them in the ./flownet2/pretrained_models folder.
This project was built with
- python v3.7
- pytorch v1.0.0
The environment used for developing this project is available in environment.yml.
Clone the repository to your local machine using
git clone https://github.com/vineeths96/Video-Interpolation-using-Deep-Optical-Flow
Create a new conda environment and install all the libraries by running the following command
conda env create -f environment.yml
The datasets used in this project are already available in this repository. To test on other datasets, download them and put them in the input/ folder.
We use the pre-trained FlowNet2 model from NVIDIA and experiment with fine-tuning it.
To interpolate the frames with the pre-trained FlowNet2 model, run the following command. This will interpolate the intermediate frames and store them in this folder.
python pretrained_interpolation.py
To interpolate the frames with the fine-tuned FlowNet2 model, run the following command. Set the parameters for fine-tuning in the parameters file. This will interpolate the intermediate frames and store them in this folder.
python finetuned_interpolation.py
Note that the GIFs below might not be in sync depending on the network quality. Clone the repository to your local machine and open them locally to see them in sync.
A detailed description of algorithms and analysis of the results are available in the Report.
The plots below show the estimated optical flow for the datasets with the pre-trained model and the fine-tuned model. We can see that there is no significant change in the estimated optical flow between the two methods.
| Corridor Dataset | Pre-Trained Optical Flow | Fine-Tuned Optical Flow |
| --- | --- | --- |
| Sphere Dataset | Pre-Trained Optical Flow | Fine-Tuned Optical Flow |
| --- | --- | --- |
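One simple way to quantify how close the pre-trained and fine-tuned flows are is the average endpoint error (EPE) between the two flow fields. The helper below is an illustrative sketch, not part of the repository:

```python
import numpy as np

def endpoint_error(flow_a, flow_b):
    """Mean Euclidean distance between two dense flow fields of shape (H, W, 2).
    An EPE near zero means the two estimates are nearly identical."""
    return float(np.mean(np.linalg.norm(flow_a - flow_b, axis=-1)))
```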
The plots below show the interpolated frames for the datasets with the pre-trained model and the fine-tuned model. We can see that there is no significant change in the quality of the interpolated frames between the two methods.
| Corridor Dataset Ground Truth | Pre-Trained Interpolated Frame | Fine-Tuned Interpolated Frame |
| --- | --- | --- |
| Sphere Dataset Ground Truth | Pre-Trained Interpolated Frame | Fine-Tuned Interpolated Frame |
| --- | --- | --- |
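The similarity between an interpolated frame and the ground truth can be measured with PSNR. The helper below is an illustrative sketch, not part of the repository:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a ground-truth frame and an
    interpolated frame; higher is better, identical frames give infinity."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```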
Distributed under the MIT License. See LICENSE for more information.
Vineeth S - [email protected]
Project Link: https://github.com/vineeths96/Video-Interpolation-using-Deep-Optical-Flow
- Fitsum Reda et al. flownet2-pytorch: Pytorch implementation of FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks. https://github.com/NVIDIA/flownet2-pytorch, 2017.