Supporting Kits for generating waymo annotation files suitable for PIEPredict. Some files could still need manual data insertions.


eyuell/waymo-pie-annotation

Data Extractor from Waymo Open Dataset and Sample Annotations for use with PIEPredict

This repository holds supplementary code (kits) and documents for the implementation of PIEPredict on the Waymo Open Dataset.

Preparation

The executable kits are the files whose names start with a number; the numbers indicate the order in which they should be executed. In particular, kit 2 requires the output files of kit 1. The source and destination folders must be set correctly inside the code, and the Waymo Open Dataset (WOD) segments (files ending in .tfrecord) must be placed in an input folder.
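The expected layout can be prepared up front. The sketch below creates the folders referenced in the steps that follow (the exact names `input/segment`, `camera/images`, `camera/labels`, and `annot_files` are taken from this README; the helper itself is illustrative, not part of the kits):

```python
from pathlib import Path

def prepare_workspace(main_path):
    """Create the folder layout the kits expect under main_path.

    Folder names are assumed from the steps in this README:
    input/segment holds the .tfrecord segments, camera/* holds the
    Step 1 output, and annot_files holds generated annotations.
    """
    root = Path(main_path)
    for sub in ("input/segment", "camera/images", "camera/labels", "annot_files"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    return root
```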

Step 1: Extract Labels and Images

From the data_extractor folder, the 1extract_image_label.py file needs a main_path that holds the kit's input/segment and output folders. Alternatively, the segments_dir and output_dir paths can be supplied separately.

Then the command python 1extract_image_label.py can be executed; the images and labels for each segment will be written to the camera/images and camera/labels folders, respectively.
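The actual extraction requires TensorFlow and the Waymo Open Dataset SDK, but the output stage can be sketched without them. The label-file format below (one file per frame, one line per object with a type and a center-based bounding box) is an assumption for illustration; the real format is whatever 1extract_image_label.py writes:

```python
from pathlib import Path

def write_frame_label(labels_dir, segment_name, frame_idx, boxes):
    """Write one label file per camera frame (hypothetical format).

    boxes is a list of (obj_type, cx, cy, w, h) tuples; the file name
    combines the segment name and a zero-padded frame index.
    """
    labels_dir = Path(labels_dir)
    labels_dir.mkdir(parents=True, exist_ok=True)
    path = labels_dir / f"{segment_name}_{frame_idx:05d}.txt"
    lines = [f"{obj_type} {cx} {cy} {w} {h}" for obj_type, cx, cy, w, h in boxes]
    path.write_text("\n".join(lines))
    return path
```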

Step 2A: Generate Pedestrian Annotations

The annotation files corresponding to the annotation and annotation_attributes folders of the PIE implementation can be generated by executing the command python 2fetch_ped_data.py. The output folder of Step 1 is used as the input of this step: after reading the camera/labels/ folder, the resulting annotations are saved in the annot_files folder.

These annotations contain pedestrian information: the pedestrian id, frame numbers, and bounding box coordinates. The remaining attributes are expected to be filled in manually by studying each pedestrian in each frame. To obtain the compiled images together with the id of each pedestrian, Step 2B can be used.
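Conceptually, this step groups per-frame detections into per-pedestrian records. A minimal sketch of that grouping, assuming detections arrive as (frame, pedestrian id, bounding box) tuples (the real script parses them from the label files):

```python
def build_ped_annotations(detections):
    """Group per-frame pedestrian detections into per-pedestrian records.

    Each record holds the pedestrian id, its frame numbers, and one
    bounding box per frame, mirroring the annotation content described
    above; other PIE attributes are left for manual annotation.
    """
    peds = {}
    for frame, ped_id, bbox in detections:
        rec = peds.setdefault(ped_id, {"id": ped_id, "frames": [], "bbox": []})
        rec["frames"].append(frame)
        rec["bbox"].append(list(bbox))
    return peds
```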

Step 2B: Generate Pedestrian Annotations and Identify Pedestrians

While generating the pedestrian annotations, new images showing the identification of each pedestrian in each frame can also be compiled by appending a positive number to the command, as in python 2fetch_ped_data.py 1. The images will then be placed in the compiled_imgs folder.
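The optional flag amounts to a simple check on the trailing command-line argument. A sketch of that check, assuming (as described above) that any positive number enables image compilation:

```python
def should_compile_images(argv):
    """Return True when a trailing positive number is passed on the
    command line (e.g. argv from `python 2fetch_ped_data.py 1`).
    Assumed interpretation of the flag described in this README."""
    if len(argv) > 1:
        try:
            return int(argv[1]) > 0
        except ValueError:
            return False
    return False
```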

Step 3: Generate Vehicle Movement information (OBD)

Within the scope of predicting pedestrian trajectories with the PIE model, the annotation corresponding to the annotation_vehicle folder can be generated with the command python 3waymo_OBD_extractor.py. The file setup is the same as in Step 1: the segment files are read from input/segment and the resulting annotation is saved in the annot_files folder.
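OBD-style quantities such as speed and heading can be derived from consecutive ego-vehicle positions. This is an assumed reconstruction for illustration only; the actual script reads vehicle pose data directly from the WOD segments:

```python
import math

def obd_from_positions(positions, dt=0.1):
    """Derive OBD-like speed (m/s) and heading (degrees) from a list of
    consecutive ego positions (x, y), sampled dt seconds apart.

    Hypothetical helper: WOD frames are sampled at 10 Hz, hence the
    default dt of 0.1 s.
    """
    rows = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        dx, dy = x1 - x0, y1 - y0
        rows.append({
            "speed": math.hypot(dx, dy) / dt,        # displacement / time
            "heading": math.degrees(math.atan2(dy, dx)),
        })
    return rows
```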

Additional kit

The last kit, in the ped_bounding_box_area folder, can be used to calculate the area of the pedestrians' bounding boxes for each segment. It displays the area for each segment and the summed total. This kit needs the location of the annotations folder as input. To execute it, use the command python compute_bb_area.py.
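The computation itself reduces to summing rectangle areas per segment. A minimal sketch, assuming corner-style boxes (x1, y1, x2, y2); the kit's actual box format may differ:

```python
def bbox_area(x1, y1, x2, y2):
    """Area of one bounding box given its corner coordinates."""
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def segment_areas(segments):
    """Return per-segment summed box areas and the grand total,
    mirroring the output described above (report format assumed).
    segments maps segment name -> list of (x1, y1, x2, y2) boxes."""
    per_segment = {name: sum(bbox_area(*box) for box in boxes)
                   for name, boxes in segments.items()}
    return per_segment, sum(per_segment.values())
```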

Sample Annotations

Six sample video annotations are provided. They are grouped for training, validation, and testing in three folders, wod1tr, wod2va, and wod3te respectively, for each of the annotation types. The images from the Waymo Open Dataset should be distributed into folders with the same names. The sample annotations are in the sample_annotation folder, and the segment names associated with the annotations are listed in the segments_list file.
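Resolving where a segment's files belong then follows directly from the split. A small sketch of that mapping, using the folder names above (the `split_path` helper itself is hypothetical):

```python
from pathlib import Path

# Split-to-folder mapping taken from this README.
SPLIT_DIRS = {"train": "wod1tr", "val": "wod2va", "test": "wod3te"}

def split_path(root, split, segment_name):
    """Return the folder where a segment's images or annotations belong,
    following the wod1tr/wod2va/wod3te naming convention."""
    return Path(root) / SPLIT_DIRS[split] / segment_name
```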

Corresponding author

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
