GitHub page for the paper: AudioIMU: Enhancing Inertial Sensing-Based Activity Recognition with Acoustic Models (to appear at ACM ISWC 2022)
Train DeepConvLSTM activity recognition models with IMU inputs only: lab_motion_train.py
Train and evaluate the teacher model 1 (audio inputs): lab_audio_train.py
Train and evaluate the teacher model 2 (audio + IMU inputs): lab_multimodal_train.py
Train and evaluate the student models across the 15 participants: joint_trainfixlr_loso_individual.py (see the distillation sketch after this list)
If you want to run a parameter search for your own setting (especially if you experiment with a new model architecture or your own data), you can adapt the script main_args_individuals.py
If you just want to run inference on the participants' data with our trained models, follow the notebook sample_inference.ipynb
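For orientation, the student training follows a teacher-student (knowledge distillation) pattern: the IMU-only student is supervised both by the ground-truth activity labels and by the soft outputs of an audio-based teacher. The snippet below is a minimal sketch of such a distillation loss, assuming PyTorch; the temperature, loss weighting, and function name are illustrative placeholders rather than the exact settings used in our scripts.

```python
# Minimal sketch of a teacher-student distillation loss (illustrative values only).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=3.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and soft-target KL divergence."""
    # Hard loss: student predictions against the ground-truth activity labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft loss: student distribution against the (frozen) teacher's distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * hard + (1.0 - alpha) * soft
```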
====
All model architectures and FFT functions are defined in models.py
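For reference, the sketch below shows the general shape of a DeepConvLSTM-style network (a convolutional feature extractor followed by LSTM layers), assuming PyTorch; the layer sizes, kernel sizes, and class count are illustrative and do not necessarily match the definitions in models.py.

```python
# Minimal sketch of a DeepConvLSTM-style network for windowed IMU input.
import torch
import torch.nn as nn

class DeepConvLSTMSketch(nn.Module):
    def __init__(self, n_channels=6, n_classes=10, n_filters=64, hidden=128):
        super().__init__()
        # Convolutions over the time axis of each IMU window (batch, channels, time).
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, n_filters, kernel_size=5), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=5), nn.ReLU(),
        )
        # Recurrent layers over the convolutional feature sequence.
        self.lstm = nn.LSTM(n_filters, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, n_channels, time)
        feats = self.conv(x)              # (batch, n_filters, time')
        feats = feats.permute(0, 2, 1)    # (batch, time', n_filters)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])        # classify from the last time step
```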
Weights of our tested models can be downloaded from https://doi.org/10.18738/T8/S5RTFH. The test data file is named rawAudioSegmentedData_window_10_hop_0.5_Test_NEW.pkl
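If you load the weights and data outside our scripts, the general flow looks like the sketch below, assuming PyTorch and weights stored with torch.save(model.state_dict(), ...); the model class name, constructor arguments, weight file name, and input shape are hypothetical placeholders, so check models.py and the downloaded files for the actual names.

```python
# Minimal sketch of loading the released data file and a saved set of weights.
import pickle
import torch

from models import DeepConvLSTM                      # hypothetical class name; see models.py

# Pre-segmented test windows released alongside the weights.
with open("rawAudioSegmentedData_window_10_hop_0.5_Test_NEW.pkl", "rb") as f:
    data = pickle.load(f)                            # structure depends on the released file

model = DeepConvLSTM()                               # use the constructor arguments from training
model.load_state_dict(torch.load("student_model.pt", map_location="cpu"))  # hypothetical file name
model.eval()

with torch.no_grad():
    window = torch.randn(1, 6, 200)                  # one illustrative IMU window (channels x samples)
    print("Predicted class:", model(window).argmax(dim=1).item())
```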