git clone https://github.com/Saint-lsy/Polyp-Gen.git
cd Polyp-Gen
conda create -n PolypGen python=3.10
conda activate PolypGen
pip install -r requirements.txt
This model was trained on the LDPolypVideo dataset.
We filtered out low-quality images affected by blur, specular reflections, and ghosting, and finally selected 55,883 samples: 29,640 polyp frames and 26,243 non-polyp frames.
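The exact filtering criteria are not spelled out here, but a common way to reject blurry frames is the variance-of-the-Laplacian sharpness score. The sketch below is a hypothetical illustration of that idea (the function names and the threshold are assumptions, not the repo's actual code):

```python
# Hypothetical blur filter: score a grayscale frame by the variance of its
# Laplacian response; low variance suggests a blurry frame.

def laplacian_variance(gray):
    """Sharpness score for a 2-D grayscale image given as a list of rows."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian kernel: sum of neighbours minus 4x centre
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def keep_frame(gray, threshold=100.0):
    # Keep only frames whose sharpness exceeds a tunable threshold.
    return laplacian_variance(gray) >= threshold
```

A perfectly flat frame scores zero and is rejected, while a high-contrast frame passes; real pipelines would tune the threshold on held-out frames and add separate checks for reflections and ghosting.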
Our dataset can be downloaded here.
The pre-trained model is Stable Diffusion 2 Inpainting, available on Hugging Face.
You can train your own model using the script:
bash scripts/train.sh
You can download the checkpoints of our Polyp-Gen from here.
python sample_one_image.py
The weights of the pretrained DINOv2 model can be found here.
The first step is building the database and performing Global Retrieval.
python GlobalRetrieval.py --data_path /path/of/non-polyp/images --database_path /path/to/build/database --image_path /path/of/query/image/
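The retrieval idea behind this step can be sketched as follows: embed every non-polyp image into a feature vector (e.g. with DINOv2), store the vectors as the database, and rank them by cosine similarity to the query embedding. This is a minimal stand-in, not the repo's implementation; the function names and the pluggable `embed` callable are assumptions:

```python
import math

def cosine(u, v):
    # Cosine similarity between two non-zero feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def build_database(images, embed):
    # images: iterable of (name, image); embed: image -> feature vector,
    # e.g. a DINOv2 global embedding in the real pipeline.
    return [(name, embed(img)) for name, img in images]

def global_retrieval(query, database, embed, top_k=1):
    # Rank database entries by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(database, key=lambda entry: cosine(q, entry[1]), reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

With precomputed embeddings, `embed` is just the identity, and the top-ranked name is the best non-polyp match for the query.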
The second step is Local Matching for the query image.
python LocalMatching.py --ref_image /path/ref/image --ref_mask /path/ref/mask --query_image /path/query/image --mask_proposal /path/to/save/mask
A demo of Local Matching:
python LocalMatching.py --ref_image demos/img_1513_neg.jpg --ref_mask demos/mask_1513.jpg --query_image demos/img_1592_neg.jpg --mask_proposal gen_mask.jpg
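Conceptually, local matching transfers the reference mask onto the query image by matching patch-level features: each masked reference patch votes for its most similar query patch, and the votes form the mask proposal. The sketch below illustrates this under stated assumptions (dict-based patch features and nearest-neighbour matching are simplifications, not the repo's actual code):

```python
def sq_dist(u, v):
    # Squared Euclidean distance between two feature vectors.
    return sum((a - b) ** 2 for a, b in zip(u, v))

def local_matching(ref_feats, ref_mask, query_feats):
    """Transfer a reference mask onto a query image via feature matching.

    ref_feats / query_feats: dicts mapping patch position (row, col) to a
    feature vector; ref_mask: set of masked reference positions.
    Each masked reference patch votes for its nearest query patch.
    """
    proposal = set()
    for pos in ref_mask:
        f = ref_feats[pos]
        best = min(query_feats, key=lambda q: sq_dist(f, query_feats[q]))
        proposal.add(best)
    return proposal
```

The resulting set of query patch positions plays the role of the `--mask_proposal` output, which the next step consumes as the inpainting mask.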
The third step is using the generated mask to sample.
The code is based on the following projects. Many thanks to their authors!