# Deep Inertial Poser: Learning to Reconstruct Human Pose from Sparse Inertial Measurements in Real Time
This repository contains the code published alongside our SIGGRAPH Asia paper. It is organised as follows:

- `train_and_eval` contains code to train and evaluate the neural networks proposed in the paper.
- `live_demo` contains Unity and Python scripts to use the models for real-time inference.
- `data_synthesis` contains a script to produce synthetic IMU measurements from SMPL sequences.

Please refer to the READMEs in the respective subfolders for more details.
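The core idea of the synthesis step can be sketched as follows. This is a minimal illustration, not the actual `data_synthesis` script: it assumes virtual sensor positions have already been extracted from posed SMPL meshes, and recovers accelerations by second-order finite differencing (the function name and shapes are illustrative):

```python
import numpy as np

def synthesize_accelerations(positions, frame_rate=60.0):
    """Synthesize accelerations from virtual sensor positions via
    second-order finite differences: a_t = (p_{t+1} - 2*p_t + p_{t-1}) / dt^2.

    positions: (T, S, 3) array of sensor positions over T frames for S sensors.
    Returns a (T-2, S, 3) array of accelerations.
    """
    dt = 1.0 / frame_rate
    return (positions[2:] - 2.0 * positions[1:-1] + positions[:-2]) / (dt * dt)

# Toy check: a sensor undergoing constant acceleration along z.
t = np.arange(5, dtype=np.float64) / 60.0
pos = np.zeros((5, 1, 3))
pos[:, 0, 2] = 0.5 * 9.81 * t ** 2  # p = 1/2 * a * t^2
acc = synthesize_accelerations(pos, frame_rate=60.0)
```

For the toy trajectory above, the finite differences recover the constant acceleration exactly, which is a convenient sanity check when adapting this to real SMPL vertex trajectories.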
To download the data, please visit the project page. From there you can also download the SMPL reference parameters for the TotalCapture dataset. To preprocess the TotalCapture data, please refer to `read_TC_data.py`.
Apart from the live demo, this repository does not include any other visualization tools. However, the data can easily be visualized with the aitviewer. The examples provided with aitviewer contain two scripts that load data associated with DIP:
- Loading ground-truth SMPL poses and IMUs from the DIP-IMU dataset.
- Loading ground-truth SMPL poses and IMUs from the TotalCapture dataset.
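Independently of aitviewer, the raw DIP-IMU sequences can be inspected directly. The sketch below assumes each `.pkl` file holds a dict with keys `'imu_ori'` (sensor orientations), `'imu_acc'` (sensor accelerations), and `'gt'` (SMPL pose parameters), and that the files were pickled under Python 2, hence `encoding='latin1'`; verify key names against your download before relying on this:

```python
import pickle
import numpy as np

def load_dip_imu(path):
    """Load one DIP-IMU sequence from a .pkl file.

    Assumed layout (check against the downloaded data):
      'imu_ori': (T, num_sensors, 3, 3) rotation matrices
      'imu_acc': (T, num_sensors, 3) accelerations
      'gt':      (T, 72) SMPL pose parameters in angle-axis format
    """
    with open(path, "rb") as f:
        # encoding='latin1' lets Python 3 read pickles written under Python 2.
        data = pickle.load(f, encoding="latin1")
    oris = np.asarray(data["imu_ori"])
    accs = np.asarray(data["imu_acc"])
    poses = np.asarray(data["gt"])
    return oris, accs, poses
```

From there, the pose array can be handed to any SMPL implementation for rendering, or to aitviewer's SMPL sequence loader.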
For questions or problems, please file an issue or contact [email protected] or [email protected].
If you use this code or data for your own work, please cite:
```bibtex
@article{DIP:SIGGRAPHAsia:2018,
  title = {Deep Inertial Poser: Learning to Reconstruct Human Pose from Sparse Inertial Measurements in Real Time},
  author = {Huang, Yinghao and Kaufmann, Manuel and Aksan, Emre and Black, Michael J. and Hilliges, Otmar and Pons-Moll, Gerard},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH Asia)},
  volume = {37},
  pages = {185:1--185:15},
  publisher = {ACM},
  month = nov,
  year = {2018},
  note = {First two authors contributed equally},
  month_numeric = {11}
}
```