Instruction-Guided Visual Masking

The official implementation of "Instruction-Guided Visual Masking"

[paper] [project page]

Introduction

We introduce Instruction-guided Visual Masking (IVM), a versatile visual grounding model that is compatible with diverse multimodal models, such as LMMs and robot models. By constructing visual masks over instruction-irrelevant regions, IVM-enhanced multimodal models can focus on task-relevant image regions and better align with complex instructions. Specifically, we design a visual-masking data generation pipeline and create the IVM-Mix-1M dataset with 1 million image-instruction pairs. We further introduce a new learning technique, Discriminator Weighted Supervised Learning (DWSL), for preferential IVM training that prioritizes high-quality data samples. Experimental results on generic multimodal tasks such as VQA and embodied robotic control demonstrate the versatility of IVM, which, as a plug-and-play tool, significantly boosts the performance of diverse multimodal models.
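
For intuition, the sketch below illustrates the general idea behind discriminator-weighted supervised learning: a per-sample supervised loss is scaled by a discriminator's quality estimate so that high-quality samples dominate the gradient. This is only a conceptual sketch; the function and tensor names (`dwsl_loss`, `disc_scores`, the sigmoid weighting, the pixel-wise BCE loss) are illustrative assumptions, not the exact formulation used in the paper.

import torch
import torch.nn.functional as F

def dwsl_loss(pred_masks, gt_masks, disc_scores):
    """Conceptual discriminator-weighted supervised loss (illustrative only).

    pred_masks:  (B, H, W) predicted mask logits
    gt_masks:    (B, H, W) ground-truth masks as floats in {0, 1}
    disc_scores: (B,) raw discriminator scores estimating sample quality
    """
    # Per-sample supervised loss, e.g. pixel-wise binary cross-entropy.
    per_sample = F.binary_cross_entropy_with_logits(
        pred_masks, gt_masks, reduction="none"
    ).mean(dim=(1, 2))

    # Map discriminator scores to (0, 1) weights; higher-quality samples
    # receive larger weights and contribute more to the gradient.
    weights = torch.sigmoid(disc_scores).detach()

    return (weights * per_sample).mean()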

[Demo figures: "Duck on green plate" | "Red cup on red plate" | "Red cup on red plate" | "Red cup on silver pan" | "Red cup on silver pan"]

Content

  • Quick Start
  • Model Zoo
  • Evaluation
  • Dataset
  • Acknowledgement
  • Citation

Quick Start

Install

1. Clone this repository and navigate to the IVM folder:

git clone https://github.com/2toinf/IVM.git
cd IVM

2. Create an environment and install the package:

conda create -n IVM
conda activate IVM
pip install -e .
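
To verify the editable install, a minimal sanity check is to import the `load` entry point used in the Usage section below:

python -c "from IVM import load; print(load)"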

Usage

from IVM import load
from PIL import Image

# ckpt_path: path to a downloaded IVM checkpoint (see Model Zoo below)
model = load(ckpt_path, type="lisa", low_gpu_memory=False)  # set `low_gpu_memory=True` if you don't have enough GPU memory

image = Image.open("assets/demo.jpg")
instruction = "pick up the red cup"

result = model(image, instruction, threshold=0.5)
'''
result contents:
    'soft': heatmap
    'hard': segmentation map
    'blur_image': rgb
    'highlight_image': rgb
    'cropped_blur_img': rgb
    'cropped_highlight_img': rgb
    'alpha_image': rgba
'''


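The returned views can be saved for inspection or passed to a downstream LMM. The snippet below is a minimal sketch that assumes the dictionary values are RGB/RGBA images convertible with `numpy.asarray` (e.g. PIL images or arrays); only the key names come from the comment above.

import numpy as np
from PIL import Image

# Save a few of the returned views for a quick visual check.
for key in ("highlight_image", "blur_image", "alpha_image"):
    arr = np.asarray(result[key])
    if arr.dtype != np.uint8:
        # Assume float images in [0, 1] if not already uint8.
        arr = (np.clip(arr, 0.0, 1.0) * 255).astype(np.uint8)
    Image.fromarray(arr).save(f"{key}.png")
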
Model Zoo

Coming Soon

Evaluation

VQA-type benchmarks

Coming Soon

Real-Robot

Policy Learning: https://github.com/Facebear-ljx/BearRobot

Robot Infrastructure: https://github.com/rail-berkeley/bridge_data_robot

Dataset

Coming Soon

Acknowledgement

This work is built upon LLaVA and SAM, and borrows ideas from LISA.

Citation

@misc{zheng2024instructionguided,
    title={Instruction-Guided Visual Masking},
    author={Jinliang Zheng and Jianxiong Li and Sijie Cheng and Yinan Zheng and Jiaming Li and Jihao Liu and Yu Liu and Jingjing Liu and Xianyuan Zhan},
    year={2024},
    eprint={2405.19783},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
