Adding new zero-shot examples #32483

Open · wants to merge 80 commits into base: main

Changes from 65 commits

Commits (80)
3b2c56b
Starting to fix GroundingDinoLoss and GroundingDinoHungarianMatcher
EduardoPach Jun 25, 2024
886e275
Merge remote-tracking branch 'upstream/main' into fix-grounding-dino-…
EduardoPach Jun 25, 2024
3b328c0
More updates
EduardoPach Jun 30, 2024
3b84fa7
More updates
EduardoPach Jul 3, 2024
ce59ba7
fixed: GroundingDinoLoss
EduardoPach Jul 13, 2024
d66567d
Merge remote-tracking branch 'upstream/main' into fix-grounding-dino-…
EduardoPach Jul 13, 2024
261305d
fixed: failing tests
EduardoPach Jul 13, 2024
51c201a
fix typo
SangbumChoi Jul 15, 2024
9dd38e2
uniform kwargs
SangbumChoi Jul 15, 2024
ebc3862
make style
SangbumChoi Jul 15, 2024
c6dc445
add comments
SangbumChoi Jul 15, 2024
16ddefd
remove return_tensors
SangbumChoi Jul 16, 2024
f956065
Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
EduardoPach Jul 19, 2024
d06206e
Update tests/models/grounding_dino/test_modeling_grounding_dino.py
EduardoPach Jul 19, 2024
8f1ffc6
Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
EduardoPach Jul 19, 2024
61e7658
Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
EduardoPach Jul 19, 2024
5d356fc
Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
EduardoPach Jul 19, 2024
0860f3b
Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
EduardoPach Jul 19, 2024
07048ad
Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
EduardoPach Jul 19, 2024
1f81e13
Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
EduardoPach Jul 19, 2024
a20eea8
Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
EduardoPach Jul 19, 2024
cbe6ea8
Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
EduardoPach Jul 19, 2024
1f9a0ee
remove common_kwargs from processor since it propagates
SangbumChoi Jul 23, 2024
0696dcf
make style
SangbumChoi Jul 23, 2024
850b9d5
return_token_type_ids to True
SangbumChoi Jul 23, 2024
7108900
Addressed comments
EduardoPach Jul 23, 2024
c96c02b
revert the default imagekwargs since does not accept any value in the…
SangbumChoi Jul 24, 2024
8cff6b6
revert processing_utils.py
SangbumChoi Jul 24, 2024
bb1f18b
make style
SangbumChoi Jul 24, 2024
a476c6e
add molbap's commit
SangbumChoi Jul 24, 2024
7b1ee08
Merge branch 'main' of https://github.com/SangbumChoi/transformers in…
SangbumChoi Jul 24, 2024
8104521
fix typo
SangbumChoi Jul 24, 2024
5d6a088
fix common processor
SangbumChoi Jul 24, 2024
d5b13d2
remain
SangbumChoi Jul 24, 2024
889c4ed
Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
EduardoPach Jul 29, 2024
73942bc
add: cardinality loss and make box loss as copy from
EduardoPach Jul 29, 2024
12433e9
Merge remote-tracking branch 'upstream/main' into fix-grounding-dino-…
EduardoPach Jul 29, 2024
7932111
change: default for reduction loss is sum
EduardoPach Jul 29, 2024
277b356
fix: vectorized generate fake box
EduardoPach Jul 29, 2024
7f9df29
fix copies
EduardoPach Jul 29, 2024
1cf9139
Revert "add molbap's commit"
SangbumChoi Jul 29, 2024
53dce5c
Merge branch 'main' of https://github.com/SangbumChoi/transformers in…
SangbumChoi Jul 29, 2024
86722b4
add unsync PR
SangbumChoi Jul 29, 2024
8baa8e0
revert
SangbumChoi Jul 29, 2024
39f28af
make CI happy
SangbumChoi Aug 5, 2024
59982bc
Addressed comments
EduardoPach Aug 5, 2024
7366aab
nit
SangbumChoi Aug 5, 2024
feca8d9
Merge branch 'fix-grounding-dino-loss' of https://github.com/EduardoP…
SangbumChoi Aug 6, 2024
09cec9b
Merge branch 'fix_grounding_dino' of https://github.com/SangbumChoi/t…
SangbumChoi Aug 6, 2024
92e2f84
tmp
SangbumChoi Aug 6, 2024
9ec8dd9
erase
SangbumChoi Aug 6, 2024
3c8dd45
fix typo
SangbumChoi Aug 6, 2024
baebefd
tmp
SangbumChoi Aug 7, 2024
bde2d12
tmp
SangbumChoi Aug 7, 2024
cc399c1
make style
SangbumChoi Aug 7, 2024
1b156b5
tmp
SangbumChoi Aug 7, 2024
ffce43c
currently obejct_detecion.py has OOM error
SangbumChoi Aug 8, 2024
b067cfd
pre-final
SangbumChoi Aug 8, 2024
9567b65
tmp
SangbumChoi Aug 8, 2024
94ce5c8
Merge branch 'main' of https://github.com/SangbumChoi/transformers in…
SangbumChoi Aug 8, 2024
c129338
pre-final
SangbumChoi Aug 9, 2024
30000e9
final
SangbumChoi Aug 9, 2024
a1e4e1c
nit
SangbumChoi Aug 11, 2024
bee0900
add self
SangbumChoi Aug 11, 2024
f0cb798
add tests
SangbumChoi Aug 11, 2024
a4fbc5b
Merge remote-tracking branch 'upstream/main' into fix-grounding-dino-…
EduardoPach Aug 12, 2024
9735814
fix label
SangbumChoi Aug 16, 2024
0796916
addressed comments
EduardoPach Aug 21, 2024
03f6f3f
addressed one-hot
EduardoPach Aug 21, 2024
4ed4881
Merge remote-tracking branch 'upstream/main' into fix-grounding-dino-…
EduardoPach Aug 22, 2024
c99a214
Merge branch 'fix-grounding-dino-loss' of https://github.com/EduardoP…
SangbumChoi Aug 23, 2024
4788402
Update tests/models/grounding_dino/test_modeling_grounding_dino.py
EduardoPach Aug 27, 2024
31a2c0f
Merge remote-tracking branch 'upstream/main' into fix-grounding-dino-…
EduardoPach Aug 27, 2024
9782de0
Addressed comments
EduardoPach Aug 27, 2024
ad190ac
fixed test
EduardoPach Aug 27, 2024
2fca079
remove items()
SangbumChoi Sep 11, 2024
df3659e
Merge branch 'fix-grounding-dino-loss' of https://github.com/EduardoP…
SangbumChoi Sep 26, 2024
1684d1a
enable metric
SangbumChoi Sep 26, 2024
db28574
Merge branch 'main' of https://github.com/SangbumChoi/transformers in…
SangbumChoi Sep 26, 2024
fc3c4ea
fix docs
SangbumChoi Sep 26, 2024
2 changes: 1 addition & 1 deletion examples/pytorch/README.md
@@ -48,7 +48,7 @@ Coming soon!
| [**`semantic-segmentation`**](https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation) | [SCENE_PARSE_150](https://huggingface.co/datasets/scene_parse_150) | ✅ | ✅ |✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb)
| [**`object-detection`**](https://github.com/huggingface/transformers/tree/main/examples/pytorch/object-detection) | [CPPE-5](https://huggingface.co/datasets/cppe-5) | ✅ | ✅ |✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/pytorch/object_detection.ipynb)
| [**`instance-segmentation`**](https://github.com/huggingface/transformers/tree/main/examples/pytorch/instance-segmentation) | [ADE20K sample](https://huggingface.co/datasets/qubvel-hf/ade20k-mini) | ✅ | ✅ |✅ |

| [**`zero-shot`**](https://github.com/huggingface/transformers/tree/main/examples/pytorch/zero-shot) | [CPPE-5](https://huggingface.co/datasets/cppe-5) | ✅ | ✅ |✅ | /

## Running quick tests

2 changes: 1 addition & 1 deletion examples/pytorch/object-detection/README.md
@@ -69,7 +69,7 @@ python run_object_detection.py \
`--eval_do_concat_batches false` is required for correct evaluation of detection models;
`--ignore_mismatched_sizes true` is required to load detection model for finetuning with different number of classes.

The resulting model can be seen here: https://huggingface.co/qubvel-hf/qubvel-hf/detr-resnet-50-finetuned-10k-cppe5. The corresponding Weights and Biases report [here](https://api.wandb.ai/links/qubvel-hf-co/bnm0r5ex). Note that it's always advised to check the original paper to know the details regarding training hyperparameters. Hyperparameters for current example were not tuned. To improve model quality you could try:
The resulting model can be seen here: https://huggingface.co/qubvel-hf/detr-resnet-50-finetuned-10k-cppe5. The corresponding Weights and Biases report [here](https://api.wandb.ai/links/qubvel-hf-co/bnm0r5ex). Note that it's always advised to check the original paper to know the details regarding training hyperparameters. Hyperparameters for current example were not tuned. To improve model quality you could try:
- changing image size parameters (`--shortest_edge`/`--longest_edge`)
- changing training parameters, such as learning rate, batch size, warmup, optimizer and many more (see [TrainingArguments](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments))
- adding more image augmentations (we created a helpful [HF Space](https://huggingface.co/spaces/qubvel-hf/albumentations-demo) to choose some)
30 changes: 30 additions & 0 deletions examples/pytorch/test_pytorch_examples.py
@@ -50,6 +50,7 @@
"semantic-segmentation",
"object-detection",
"instance-segmentation",
"zero-shot",
]
]
sys.path.extend(SRC_DIRS)
@@ -76,6 +77,7 @@
import run_swag
import run_translation
import run_wav2vec2_pretraining_no_trainer
import run_zero_shot_object_detection


logging.basicConfig(level=logging.DEBUG)
@@ -678,3 +680,31 @@ def test_run_instance_segmentation(self):
            run_instance_segmentation.main()
            result = get_results(tmp_dir)
            self.assertGreaterEqual(result["test_map"], 0.1)

    @patch.dict(os.environ, {"WANDB_DISABLED": "true"})
    def test_run_zero_shot_object_detection(self):
        tmp_dir = self.get_auto_remove_tmp_dir()
        testargs = f"""
            run_zero_shot_object_detection.py
            --model_name_or_path IDEA-Research/grounding-dino-tiny
            --output_dir {tmp_dir}
            --dataset_name qubvel-hf/cppe-5-sample
            --do_train
            --do_eval
            --remove_unused_columns False
            --overwrite_output_dir True
            --eval_do_concat_batches False
            --max_steps 10
            --learning_rate=5e-5
            --per_device_train_batch_size=1
            --per_device_eval_batch_size=1
            --seed 32
        """.split()

        if is_torch_fp16_available_on_device(torch_device):
            testargs.append("--fp16")

        with patch.object(sys, "argv", testargs):
            run_zero_shot_object_detection.main()
            result = get_results(tmp_dir)
            self.assertGreaterEqual(result["test_map"], 0.01)
252 changes: 252 additions & 0 deletions examples/pytorch/zero-shot/README.md
@@ -0,0 +1,252 @@
<!---
Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Zero-shot object detection examples

This directory contains 2 scripts that showcase how to fine-tune any model supported by the [`GroundingDinoForObjectDetection` API](https://huggingface.co/docs/transformers/main/en/model_doc/grounding-dino#transformers.GroundingDinoForObjectDetection) using PyTorch.

Content:
* [PyTorch version, Trainer](#pytorch-version-trainer)
* [PyTorch version, no Trainer](#pytorch-version-no-trainer)
* [Reload and perform inference](#reload-and-perform-inference)
* [Note on custom data](#note-on-custom-data)


## PyTorch version, Trainer

Based on the script [`run_zero_shot_object_detection.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/zero-shot/run_zero_shot_object_detection.py).

The script leverages the [🤗 Trainer API](https://huggingface.co/docs/transformers/main_classes/trainer) to automatically take care of the training for you, running on distributed environments right away.

Here we show how to fine-tune a [GroundingDino](https://huggingface.co/IDEA-Research/grounding-dino-tiny) model on the [CPPE-5](https://huggingface.co/datasets/cppe-5) dataset:

```bash
python run_zero_shot_object_detection.py \
--model_name_or_path IDEA-Research/grounding-dino-tiny \
--dataset_name cppe-5 \
--do_train true \
--do_eval true \
--output_dir grounding-dino-tiny-finetuned-cppe-5-10k-steps \
--num_train_epochs 10 \
--image_square_size 600 \
--fp16 true \
--learning_rate 5e-5 \
--weight_decay 1e-4 \
--dataloader_num_workers 4 \
--dataloader_prefetch_factor 2 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 1 \
--remove_unused_columns false \
--eval_do_concat_batches false \
--ignore_mismatched_sizes true \
--include_inputs_for_metrics true \
--metric_for_best_model eval_map \
--greater_is_better true \
--load_best_model_at_end true \
--logging_strategy epoch \
--evaluation_strategy epoch \
--save_strategy epoch \
--save_total_limit 2 \
--push_to_hub true \
--push_to_hub_model_id grounding-dino-tiny-finetuned-cppe-5-10k-steps \
--hub_strategy end \
--seed 1337
```

> Note:
`--eval_do_concat_batches false` is required for correct evaluation of detection models;
`--ignore_mismatched_sizes true` is required to load detection model for finetuning with different number of classes.

The resulting model can be seen here: https://huggingface.co/danelcsb/grounding-dino-tiny-finetuned-10k-cppe-5-10k-steps. Note that it's always advised to check the original paper for details regarding training hyperparameters. The hyperparameters for this example were not tuned. To improve model quality you could try:
- changing freeze policy of image backbone and text backbone
- changing image size parameters (`--shortest_edge`/`--longest_edge`)
- changing training parameters, such as learning rate, batch size, warmup, optimizer and many more (see [TrainingArguments](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments))
- adding more image augmentations (we created a helpful [HF Space](https://huggingface.co/spaces/qubvel-hf/albumentations-demo) to choose some)

Note that you can replace the model and dataset by simply setting the `model_name_or_path` and `dataset_name` arguments respectively, with any model or dataset from the [hub](https://huggingface.co/).
For the dataset, make sure it provides labels in the same format as the [CPPE-5](https://huggingface.co/datasets/cppe-5) dataset and that boxes are provided in [COCO format](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco).

Note that the zero-shot inference output does not have the same format as standard object-detection output. In order to compute evaluation metrics such as mean average precision, we have to modify the output a little bit.
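
A minimal sketch of what that conversion enables, assuming the post-processed predictions have already been mapped to integer class ids (the `preds`/`targets` values here are illustrative, not the script's actual code):

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

metric = MeanAveragePrecision(box_format="xyxy")

# One post-processed prediction per image, boxes in Pascal VOC (xyxy) format
preds = [{
    "boxes": torch.tensor([[302.0, 109.0, 375.0, 161.0]]),
    "scores": torch.tensor([0.87]),
    "labels": torch.tensor([0]),  # integer class ids, not text labels
}]
# Matching ground-truth annotations
targets = [{
    "boxes": torch.tensor([[300.0, 110.0, 370.0, 160.0]]),
    "labels": torch.tensor([0]),
}]

metric.update(preds, targets)
print(metric.compute()["map"])
```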

| Train method | Batch size | freeze_text_backbone | freeze_backbone | precision | MSDA kernels | GPU Memory Usage (GB) | Time (s/epoch) |
|--------------|------------|----------------------|-----------------|-----------|--------------|-----------------------|----------------|
| trainer | 2 | Y | Y | fp16 | Y | 22.785 | 353 |
| trainer | 1 | Y | Y | fp32 | Y | 8.813 | 429 |
| no_trainer | 2 | N | N | fp32 | Y | OOM | - |
| no_trainer | 1 | N | N | fp32 | N | 20.441 | 724 |
| no_trainer | 1 | N | N | fp32 | Y | 11.243 | 473 |
| no_trainer | 1 | Y | Y | fp32 | Y | 11.539 | 386 |

The table above was measured on the following device:
- Platform: Linux-5.4.0-167-generic-x86_64-with-glibc2.35
- GPU type: NVIDIA TITAN RTX
- PyTorch version (GPU): 2.2.2

## PyTorch version, no Trainer

Based on the script [`run_zero_shot_object_detection_no_trainer.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/zero-shot/run_zero_shot_object_detection_no_trainer.py).

The script leverages [🤗 `Accelerate`](https://github.com/huggingface/accelerate), which allows you to write your own training loop in PyTorch, but have it run instantly on any (distributed) environment, including CPU, multi-CPU, GPU, multi-GPU and TPU. It also supports mixed precision. However, multi-GPU evaluation is currently not working due to the following [issue](https://github.com/Lightning-AI/torchmetrics/issues/2477).

First, run:

```bash
accelerate config
```

and reply to the questions asked regarding the environment on which you'd like to train. Then

```bash
accelerate test
```

which will check that everything is ready for training. Finally, you can launch training with

```bash
accelerate launch run_zero_shot_object_detection_no_trainer.py \
--model_name_or_path "IDEA-Research/grounding-dino-tiny" \
--dataset_name cppe-5 \
--output_dir "grounding-dino-tiny-finetuned-cppe-5-10k-steps-no-trainer" \
--num_train_epochs 10 \
--image_square_size 600 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--checkpointing_steps epoch \
--learning_rate 5e-5 \
--ignore_mismatched_sizes \
--with_tracking \
--push_to_hub \
--freeze_backbone \
--freeze_text_backbone
```

and boom, you're training, possibly on multiple GPUs, logging everything to all trackers found in your environment (like Weights and Biases, Tensorboard) and regularly pushing your model to the hub (with the repo name being equal to `args.output_dir` under your HF username) 🤗

With the default settings, the script fine-tunes a [GroundingDino](https://huggingface.co/IDEA-Research/grounding-dino-tiny) model on the [CPPE-5](https://huggingface.co/datasets/cppe-5) dataset. The resulting model can be seen here: https://huggingface.co/danelcsb/grounding-dino-tiny-finetuned-10k-cppe-5-no-trainer.


## Reload and perform inference

After training, you can easily load your trained model and perform inference as follows:

```python
import requests
import torch

from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

# Name of repo on the hub or path to a local folder
model_name = "danelcsb/grounding-dino-tiny-finetuned-10k-cppe5"

processor = AutoProcessor.from_pretrained(model_name)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_name)

# Load image for inference
url = "https://images.pexels.com/photos/8413299/pexels-photo-8413299.jpeg?auto=compress&cs=tinysrgb&w=630&h=375&dpr=2"
image = Image.open(requests.get(url, stream=True).raw)
text = "Coverall. Face_Shield. Gloves. Goggles. Mask"

# Prepare image for the model
inputs = processor(images=image, text=text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Post-process model predictions:
# this includes conversion to Pascal VOC format and filtering out low-confidence boxes
width, height = image.size
target_sizes = torch.tensor([height, width]).unsqueeze(0)  # add batch dim
results = processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids, box_threshold=0.15, text_threshold=0.1, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

And visualize with the following code:
```python
from PIL import ImageDraw

draw = ImageDraw.Draw(image)

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    x, y, x2, y2 = tuple(box)
    draw.rectangle((x, y, x2, y2), outline="red", width=1)
    draw.text((x, y), model.config.id2label[label.item()], fill="white")

image
```


## Note on custom data

In case you'd like to use the script with custom data, you could prepare your data in the following way:

```bash
custom_dataset/
└── train
    ├── 0001.jpg
    ├── 0002.jpg
    ├── ...
    └── metadata.jsonl
└── validation
    └── ...
└── test
    └── ...
```

Where `metadata.jsonl` is a file with the following structure:
```json
{"file_name": "0001.jpg", "objects": {"bbox": [[302.0, 109.0, 73.0, 52.0]], "categories": [0], "id": [1], "area": [50.0]}}
{"file_name": "0002.jpg", "objects": {"bbox": [[810.0, 100.0, 57.0, 28.0]], "categories": [1], "id": [2], "area": [40.0]}}
...
```
The training script supports bounding boxes in COCO format (x_min, y_min, width, height).
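
For reference, converting a COCO box to the Pascal VOC `(x_min, y_min, x_max, y_max)` format used by the post-processed predictions is a simple shift (a standalone illustration, not a helper from the script):

```python
def coco_to_pascal_voc(bbox):
    """Convert (x_min, y_min, width, height) to (x_min, y_min, x_max, y_max)."""
    x_min, y_min, width, height = bbox
    return [x_min, y_min, x_min + width, y_min + height]

print(coco_to_pascal_voc([302.0, 109.0, 73.0, 52.0]))  # [302.0, 109.0, 375.0, 161.0]
```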

Then, you can load the dataset with just a few lines of code:

```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("imagefolder", data_dir="custom_dataset/")

# >>> DatasetDict({
# ... train: Dataset({
# ... features: ['image', 'objects'],
# ... num_rows: 2
# ... })
# ... })

# Push to hub (assumes you have run the huggingface-cli login command in a terminal/notebook)
dataset.push_to_hub("name of repo on the hub")

# optionally, you can push to a private repo on the hub
# dataset.push_to_hub("name of repo on the hub", private=True)
```

As a final step, for training you should provide an `id2label` mapping in the following way:
```python
id2label = {0: "Car", 1: "Bird", ...}
```
Just find it in the code and replace it for simplicity, or save it as a `json` file locally and push it to the hub together with the dataset!
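
For example, a minimal way to keep the mapping next to the dataset (the file name `id2label.json` is an arbitrary choice):

```python
import json

# Class ids and names must match the annotations in metadata.jsonl
id2label = {0: "Car", 1: "Bird"}

with open("id2label.json", "w") as f:
    json.dump(id2label, f)
```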

See also: [Dataset Creation Guide](https://huggingface.co/docs/datasets/image_dataset#create-an-image-dataset)
5 changes: 5 additions & 0 deletions examples/pytorch/zero-shot/requirements.txt
@@ -0,0 +1,5 @@
albumentations >= 1.4.5
timm
datasets
torchmetrics
pycocotools