
SAM-Based Video Segmentation and Logger

Prerequisites

The required Python libraries are listed in requirements.txt.

Install them with pip install, or preferably uv pip install:

uv pip install -r requirements.txt

Download the SAM 2.1 checkpoint from here and place it in checkpoints/.

Usage

  1. Place your video in data/ (only a single video is supported).
  2. Generate masks from the notebook and name them. Example masks are stored in notebooks/config/masks/; you will need to rename them.
  3. Create a config file in config/ (see config/example_config.json for reference; modifying the example config in place is not recommended).
  4. Run the script:
python process_video.py config/example_config.json --debug-output output/debug_frame.png
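Purely as an illustration of the shape such a config might take, here is a hypothetical sketch. Every field name below is an assumption; the authoritative schema is whatever config/example_config.json actually contains, so copy that file rather than this fragment:

```json
{
  "video_path": "data/76-86 (2).mp4",
  "masks_dir": "notebooks/config/masks/",
  "output_csv": "output/interaction_log.csv",
  "fps": 30
}
```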

Example output for the video 76-86 (2).mp4:

Saved 153 interaction events to output/interaction_log.csv
Total contact time per mouse/object:
  mouse_left + left_bottom: 64.23 s
  mouse_left + left_top: 69.89 s
  mouse_right + right_bottom: 85.06 s
  mouse_right + right_top: 25.33 s
Saved debug visualization to output/debug_frame.png

Interaction events are written as a comma-delimited CSV to output/interaction_log.csv.
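The per-mouse/object contact totals printed above can be recomputed from the CSV yourself. A minimal sketch using only the standard library, assuming (hypothetically — the real log's column names may differ) that each row records a mouse, an object, and start/end timestamps in seconds:

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample rows standing in for output/interaction_log.csv;
# the actual column names in the real log may differ.
sample = """mouse,object,start_s,end_s
mouse_left,left_bottom,1.0,3.5
mouse_left,left_bottom,10.0,12.0
mouse_right,right_top,4.0,5.5
"""

# Sum contact duration per (mouse, object) pair.
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample)):
    key = f"{row['mouse']} + {row['object']}"
    totals[key] += float(row["end_s"]) - float(row["start_s"])

for key, seconds in sorted(totals.items()):
    print(f"  {key}: {seconds:.2f} s")
# →   mouse_left + left_bottom: 4.50 s
# →   mouse_right + right_top: 1.50 s
```

To run this against the real file, replace io.StringIO(sample) with open("output/interaction_log.csv") and adjust the column names to match the actual header.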

TODOs

  • Add a GUI for the config file
  • Add a GUI for mask generation
  • Multiple video support
  • Improve mouse interaction tracking

License

MIT