This repository contains the code used by Group 8 to test the robustness of the "Depth from Videos in the Wild" paper. Note that the original paper's code is not included in this repository.
The first step before using anything is to clone the original paper's code into this directory:

```shell
git clone https://github.com/robot-love/depth_from_video_in_the_wild.git
```
This folder contains Nick's MATLAB code (`annotator.m`) for generating masks of possibly moving objects.
This folder contains example odometry results. The Python script `read_odom.py` can be run to view the group's replication of the original paper's odometry results.
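Odometry networks typically output frame-to-frame relative transforms, and a trajectory is recovered by chaining them together. The sketch below illustrates that accumulation step; it is a minimal illustration only, and the actual file format and conventions used by `read_odom.py` may differ.

```python
import numpy as np

def chain_poses(relative_transforms):
    """Accumulate 4x4 frame-to-frame transforms into absolute camera poses.

    Returns the list of absolute poses, starting from the identity.
    """
    pose = np.eye(4)
    trajectory = [pose.copy()]
    for t in relative_transforms:
        pose = pose @ t  # compose the next relative motion onto the current pose
        trajectory.append(pose.copy())
    return trajectory
```

The translation column of each accumulated pose gives the camera position, which is what a top-down trajectory plot visualizes.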
This folder contains code for testing the limitations of optical flow; that is, it measures how many frames can be skipped before optical flow estimation breaks down.
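The intuition behind the experiment is that skipping frames multiplies the apparent displacement between the two images being matched. The toy sketch below demonstrates this with synthetic frames and a simple cross-correlation displacement estimate; it is a hypothetical stand-in for a real optical-flow method, not code from this repo.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the integer horizontal displacement between two frames by
    cross-correlating their column-mean intensity profiles (a toy stand-in
    for dense optical flow)."""
    a = frame_a.mean(axis=0) - frame_a.mean()
    b = frame_b.mean(axis=0) - frame_b.mean()
    corr = np.correlate(b, a, mode="full")
    # Index of the correlation peak, converted to a signed lag.
    return int(np.argmax(corr)) - (len(a) - 1)

def make_frame(col, width=64, height=8):
    """Synthetic frame: a single bright vertical stripe at column `col`."""
    frame = np.zeros((height, width))
    frame[:, col] = 1.0
    return frame
```

Comparing frame 0 against frame `k` yields `k` times the per-frame motion; once that displacement exceeds the search range of a real flow method, the estimate degrades, which is the breakdown being measured.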
This folder contains code from the struct2depth repository for generating training data from the KITTI raw dataset. The code in this folder was augmented by Group 8 to allow skipping a specified number of frames.
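struct2depth trains on short frame sequences, so the frame-skipping augmentation can be thought of as sampling frame triplets with a stride. The following is a hypothetical sketch of that sampling logic (`select_triplets` is an illustration, not a function from the repo):

```python
def select_triplets(frame_ids, skip=0):
    """Return (previous, middle, next) frame-ID triplets, leaving `skip`
    frames out between consecutive members of each triplet."""
    step = skip + 1
    return [
        (frame_ids[i - step], frame_ids[i], frame_ids[i + step])
        for i in range(step, len(frame_ids) - step)
    ]
```

With `skip=0` this reproduces ordinary consecutive triplets; larger values of `skip` widen the temporal baseline between training frames.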
These two Python scripts run inference with the depth estimation and odometry estimation networks on a single example. Before running them, make sure the depth_from_video_in_the_wild repository has been cloned, then download the following data into the example_model directory: Example Model
You can then run depth or transformation inference on the provided example by simply running either of the Python scripts.