
MAVOS

Efficient Video Object Segmentation via Modulated Cross-Attention Memory


Mohamed bin Zayed University of AI, ETH Zurich, University of California - Merced, Yonsei University, Google Research, Linköping University

Paper: https://arxiv.org/abs/2403.17937

Latest

  • 2024/03/24: We released our technical report on arXiv. Our code and models are coming soon!

Abstract

Recently, transformer-based approaches have shown promising results for semi-supervised video object segmentation. However, these approaches typically struggle on long videos due to increased GPU memory demands, as they frequently expand the memory bank every few frames. We propose a transformer-based approach, named MAVOS, that introduces an optimized and dynamic long-term modulated cross-attention (MCA) memory to model temporal smoothness without requiring frequent memory expansion. The proposed MCA effectively encodes both local and global features at various levels of granularity while maintaining consistent speed regardless of the video length. Extensive experiments on multiple benchmarks (LVOS, Long-Time Video, and DAVIS 2017) demonstrate the effectiveness of our proposed contributions, leading to real-time inference and markedly reduced memory demands without any degradation in segmentation accuracy on long videos. Compared to the best existing transformer-based approach, MAVOS increases the speed by 7.6x while reducing GPU memory by 87%, with comparable segmentation performance on short and long video datasets. Notably, on the LVOS dataset, MAVOS achieves a J&F score of 63.3% while operating at 37 frames per second (FPS) on a single V100 GPU. Our code and models will be publicly released.

Intro

  • MAVOS is a transformer-based VOS method that achieves real-time FPS and reduced GPU memory on long videos. We introduce a Modulated Cross-Attention (MCA) memory, designed to efficiently propagate information from past frames to the target frame. Our approach uses a novel fusion operator that effectively handles both local and global features across diverse levels of detail (a rough sketch of the idea follows this list).

  • MAVOS increases the speed by 7.6x over the baseline DeAOT while reducing GPU memory by 87% on long videos, with comparable segmentation performance on short and long video datasets.
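Since the code is not yet released, the sketch below is only a rough illustration of the general idea, not the authors' implementation: a fixed-size memory read via cross-attention whose contribution to the output is modulated by a learned gate. Every name and design detail here (MCAMemorySketch, mem_slots, the gating layout) is an assumption; in particular, the paper's MCA memory is dynamic and updated from past frames, which this sketch omits.

```python
# Hypothetical sketch of a modulated cross-attention memory read
# (NOT the official MAVOS implementation; names and details are assumptions).
import torch
import torch.nn as nn


class MCAMemorySketch(nn.Module):
    """Fixed-size memory read via cross-attention, modulated by a gate.

    The memory bank has a constant number of slots, so the per-frame read
    cost stays flat regardless of video length (the property the paper
    targets, in contrast to memory banks that expand every few frames).
    """

    def __init__(self, dim: int, mem_slots: int = 64, num_heads: int = 8):
        super().__init__()
        # Learnable long-term memory slots (in the real method these would
        # be updated dynamically from past-frame features).
        self.memory = nn.Parameter(torch.randn(1, mem_slots, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gate that modulates how much memory context enters the output.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (B, N, dim) features of the current frame.
        mem = self.memory.expand(frame_tokens.size(0), -1, -1)
        context, _ = self.attn(frame_tokens, mem, mem)  # cross-attention read
        g = self.gate(torch.cat([frame_tokens, context], dim=-1))
        return frame_tokens + g * context               # modulated fusion


if __name__ == "__main__":
    x = torch.randn(2, 196, 256)        # two frames, 14x14 tokens, 256-dim
    out = MCAMemorySketch(dim=256)(x)
    print(out.shape)                    # torch.Size([2, 196, 256])
```

The point the sketch illustrates is the cost profile: because the memory has a constant number of slots, attention cost per frame does not grow with video length, which is how a design of this kind can keep speed and GPU memory flat on long videos.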

Examples


Results on long-video benchmarks

LVOS val set

LTV (Long-Time Video)

Acknowledgment

The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at Alvis partially funded by the Swedish Research Council through grant agreement no. 2022-06725, the LUMI supercomputer hosted by CSC (Finland) and the LUMI consortium, and by the Berzelius resource provided by the Knut and Alice Wallenberg Foundation at the National Supercomputer Centre.

Citation

If you use our work, please consider citing it:

@article{Shaker2024MAVOS,
  title={Efficient Video Object Segmentation via Modulated Cross-Attention Memory},
  author={Shaker, Abdelrahman and Wasim, Syed Talal and Danelljan, Martin and Khan, Salman and Yang, Ming-Hsuan and Khan, Fahad Shahbaz},
  journal={arXiv preprint arXiv:2403.17937},
  year={2024}
}

License

This project is released under the BSD-3-Clause license. See LICENSE for additional details.
