XMEM V7 Fork

A fork of hkchengrex/XMem ([ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model).

To launch training:

```
python -m torch.distributed.launch --master_port 25763 --nproc_per_node=2 train.py --exp_id EXPERIMENT_NAME --stage 3 --load_network saves/XMem-s012.pth
```
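
Note that recent PyTorch releases deprecate `python -m torch.distributed.launch` in favour of `torchrun`. An untested sketch of the equivalent invocation (same script arguments; be aware that `torchrun` passes the local rank via the `LOCAL_RANK` environment variable rather than a `--local_rank` argument, so `train.py` may need a small change to read it):

```
torchrun --master_port 25763 --nproc_per_node=2 train.py --exp_id EXPERIMENT_NAME --stage 3 --load_network saves/XMem-s012.pth
```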

Training data

The original training data has been moved to Dodo at /data/thom/xmem_training_data. The dataloaders are configured in train.py, in the functions named renew_<datasetname>_loader. The expected format for each "video" is a folder of Annotations (2D .png masks with values 0/1) and a matching folder of images. For in-context learning you would have just one video; a sketch of this layout follows.
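
As a concrete illustration, here is a minimal sketch of one such "video" folder, written with NumPy and PIL. The folder names, file naming, and mask shape are assumptions for illustration; check the renew_<datasetname>_loader functions in train.py for the exact names and paths the dataloaders expect.

```python
from pathlib import Path

import numpy as np
from PIL import Image

# Hypothetical single-video layout (names are assumptions):
#   my_video/
#     images/        00000.jpg, 00001.jpg, ...
#     Annotations/   00000.png, 00001.png, ...  (2D masks, values 0/1)
root = Path("my_video")
(root / "images").mkdir(parents=True, exist_ok=True)
(root / "Annotations").mkdir(parents=True, exist_ok=True)

# Write one binary mask in the "2D .png, values 0/1" format described above.
mask = np.zeros((480, 854), dtype=np.uint8)
mask[100:200, 300:500] = 1  # dummy foreground region
Image.fromarray(mask, mode="P").save(root / "Annotations" / "00000.png")

# Sanity check: reload and confirm the mask is 2D with values in {0, 1}.
reloaded = np.array(Image.open(root / "Annotations" / "00000.png"))
assert reloaded.ndim == 2 and set(np.unique(reloaded)) <= {0, 1}
```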
