This is the official implementation of *Segmentation tracking and clustering system enables accurate multi-animal tracking of social behaviors* (Patterns, 2024).
What's new
- segTracker.ai now supports model training!
- segTracker.ai also supports image annotation!
- Refactored much of the codebase and re-organized submodules with OpenMMLab tools!
Here are some videos showcasing the pose-tracking performance (tested on a Lenovo ThinkBook 16p 2021 with 16 GB RAM and an RTX 3060 Max-Q)!
The following videos are tracked with an unsupervised, animal-agnostic method.
| Name | Size (MB) | Duration | Resolution | FPS | Recording Device | idTracker.ai V4.0.12 time | segTracker.ai time | Bilibili |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 4xWT | 208 | 01:00:00 | 648 x 724 | 30 | Smartisan OD103 | NA | 3:27:08 | link |
| 6xWT | 344 | 00:10:02 | 1080 x 1080 | 60 | iPhone 13 Plus | NA | 3:03:07 | link |
| 2xWT | 157 | 00:30:10 | 608 x 864 | 30 | iPhone 13 | NA | 0:51:42 | link |
| 4xPD | 212 | 00:10:24 | 820 x 800 | 15 | S-YUE Webcam | 2:59:45 | 0:38:40 | link |
| Noduls | 10.5 | 00:10:00 | 236 x 184 | 25 | Noduls camera | 5:08:22 | 0:53:57 | link |
| 14ants | 109 | 00:12:57 | 840 x 714 | 59.94 | - | 6:27:35 | 1:51:18 | link |
| 10flies | 32.5 | 00:10:12 | 948 x 920 | 60 | - | 1d, 6:03:28 | 2:09:06 | link |
2xWT | 4xWT | 6xWT |
---|---|---|
Noduls | 4xPD | Fly |
Ant | New video | sleeping |
We did not use any frames from the 'New video' (link) to train our instance segmentation and pose model.
We also added a 'sleeping.mp4' video, in which all animals are stationary, to test our algorithm (link).
- Clone this repo to your computer with `git clone --recurse-submodules https://github.com/tctco/STCS.git` and `cd STCS`
- Download the trained models from the release page and place them inside `STCS/backend/trained_models`
- Build & run the docker container with `docker compose up` (this container is quite large, ~30 GB, I'm sorry)
- Open your browser and go to `localhost` (you can also access segTracker.ai via WAN/LAN through your host's IP)
- Convert your video to a codec supported by the Chrome/Edge browsers (we only tested H.264); see the sketch after this list
- Upload the video and start tracking!
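For reference, here is a minimal shell sketch of the setup steps above. The model archive name and the ffmpeg flags are assumptions (not from the repo), so adjust them to whatever the release page and your video actually require:

```bash
# Clone the repository together with its submodules and enter it
git clone --recurse-submodules https://github.com/tctco/STCS.git
cd STCS

# Place the downloaded trained models under backend/trained_models
# (the archive name below is hypothetical; use the files from the release page)
mkdir -p backend/trained_models
unzip ~/Downloads/trained_models.zip -d backend/trained_models

# Build and start the container (~30 GB), then open http://localhost in Chrome/Edge
docker compose up

# Re-encode a video to H.264 so the browser can play it
# (libx264 + yuv420p is one common choice; tune to your needs)
ffmpeg -i my_video.mp4 -c:v libx264 -pix_fmt yuv420p -c:a aac my_video_h264.mp4
```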
Check out this video demo!
segTracker.demo.-.Made.with.Clipchamp.mp4
There are also some other APIs you might be interested in:
- `localhost/rq`: the rq-dashboard. The default password/username is `admin`; modify this in the `docker-compose.yaml` file.
- `localhost/api/docs`: the backend APIs with a Swagger UI.
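As a quick smoke test from the command line, the sketch below assumes the rq-dashboard is protected with HTTP basic auth using the default credentials mentioned above; if your deployment differs, adjust accordingly:

```bash
# Check that the backend API docs (Swagger UI) are reachable
curl -s http://localhost/api/docs | head -n 5

# Check the rq-dashboard; -u passes the default admin/admin credentials
# (basic auth here is an assumption; change the credentials in docker-compose.yaml)
curl -s -u admin:admin http://localhost/rq | head -n 5
```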
Check here to learn how to use the STGCN-Autoencoder for learning social behavior embeddings and weakly-supervised clustering. Please download `segCluster_data` from the release page and place it under `segCluster/segCluster_data`.
Check here for the code to reproduce some of the figures. Please download `data_and_fig` from the release page and place it under `reproduce_figures/data_and_fig`.
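A minimal sketch of where the downloaded release assets go, assuming they are distributed as zip archives whose top-level folders match the names above (adjust to the actual file names):

```bash
# Behavior-embedding / clustering data -> segCluster/segCluster_data
unzip segCluster_data.zip -d segCluster/

# Figure-reproduction data -> reproduce_figures/data_and_fig
unzip data_and_fig.zip -d reproduce_figures/
```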
Figshare:
https://doi.org/10.6084/m9.figshare.25341859.v1
https://doi.org/10.6084/m9.figshare.25341856.v1
https://doi.org/10.6084/m9.figshare.25594242.v1
It takes ages to upload files to Figshare, so I had to remove some raw experimental videos from the supplementary files (they are not essential). If you wish to analyze the raw videos from scratch, please refer to the Baidu Netdisk links below.
Baidu Netdisk:
- segTracker_supp_materials: link: https://pan.baidu.com/s/1L9ix8R9n1PEDXCe2ju_X5w (extraction code: e7nn)
- segCluster_supp_materials: link: https://pan.baidu.com/s/1QdXSHxzjdXpncsxayEH89g (extraction code: mq3y)
We thank the OpenMMLab, idTracker.ai, DeepLabCut, SLEAP, TRex/TGrab, SeBA, OCSORT, ByteTrack, and YOLOv8 teams (there are too many to list here, I'm sorry) for developing wonderful open-source software!
I also thank Professor Li Hao for his generous help and Dr. Lai Chuan for providing his experimental videos.
@Article{STCS,
author={Tang, Cheng
and Zhou, Yang
and Zhao, Shuaizhu
and Xie, Mingshu
and Zhang, Ruizhe
and Long, Xiaoyan
and Zhu, Lingqiang
and Lu, Youming
and Ma, Guangzhi
and Li, Hao},
title={Segmentation tracking and clustering system enables accurate multi-animal tracking of social behaviors},
journal={Patterns},
year={2024},
month={2024/09/14},
publisher={Elsevier},
issn={2666-3899},
doi={10.1016/j.patter.2024.101057},
url={https://doi.org/10.1016/j.patter.2024.101057}
}