Add DINOv2 with registers #35348

Merged
Changes from 9 commits

Commits (23)
fc8324a
added changes from 32905
BernardZach Dec 6, 2024
0ed1114
fixed mistakes caused by select all paste
BernardZach Dec 6, 2024
64fd5e1
Merge branch 'main' of https://github.com/huggingface/transformers in…
BernardZach Dec 9, 2024
cbfa985
rename diff_dinov2...
BernardZach Dec 9, 2024
125197b
ran tests
BernardZach Dec 12, 2024
5a9256e
Merge pull request #1 from innovationcore/zach/Dino-v2-with-registers
BernardZach Dec 12, 2024
39a573a
Fix modular
NielsRogge Dec 19, 2024
f6338f2
Fix tests
NielsRogge Dec 19, 2024
8f327b6
Merge remote-tracking branch 'upstream/main' into add_dinov_2_registe…
NielsRogge Dec 19, 2024
b93fc8f
Use new init
NielsRogge Dec 20, 2024
87263fa
Simplify drop path
NielsRogge Dec 20, 2024
b9eccbf
Merge remote-tracking branch 'upstream/main' into add_dinov_2_registe…
NielsRogge Dec 20, 2024
d85a9c6
Convert all checkpoints
NielsRogge Dec 22, 2024
2c072b4
Merge remote-tracking branch 'upstream/main' into add_dinov_2_registe…
NielsRogge Dec 22, 2024
aac007b
Add figure and summary
NielsRogge Dec 23, 2024
e5c4dd2
Merge branch 'main' into add_dinov_2_registers_innovationcore
NielsRogge Dec 23, 2024
13b3235
Merge remote-tracking branch 'upstream/main' into add_dinov_2_registe…
NielsRogge Dec 23, 2024
8b13023
Update paths
NielsRogge Dec 24, 2024
7ea3747
Merge remote-tracking branch 'upstream/main' into add_dinov_2_registe…
NielsRogge Dec 24, 2024
19af3f2
Update docs
NielsRogge Dec 24, 2024
8a129f3
Update docs
NielsRogge Dec 24, 2024
e72e7f8
Update toctree
NielsRogge Dec 24, 2024
b0dd519
Update docs
NielsRogge Dec 24, 2024
2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -651,6 +651,8 @@
title: DiNAT
- local: model_doc/dinov2
title: DINOV2
- local: model_doc/dinov2_with_registers
title: Dinov2WithRegisters
- local: model_doc/dit
title: DiT
- local: model_doc/dpt
1 change: 1 addition & 0 deletions docs/source/en/index.md
@@ -127,6 +127,7 @@ Flax), PyTorch, and/or TensorFlow.
| [DialoGPT](model_doc/dialogpt) | ✅ | ✅ | ✅ |
| [DiNAT](model_doc/dinat) | ✅ | ❌ | ❌ |
| [DINOv2](model_doc/dinov2) | ✅ | ❌ | ✅ |
| [Dinov2WithRegisters](model_doc/dinov2_with_registers) | ✅ | ❌ | ❌ |
| [DistilBERT](model_doc/distilbert) | ✅ | ✅ | ✅ |
| [DiT](model_doc/dit) | ✅ | ❌ | ✅ |
| [DonutSwin](model_doc/donut) | ✅ | ❌ | ❌ |
42 changes: 42 additions & 0 deletions docs/source/en/model_doc/dinov2_with_registers.md
@@ -0,0 +1,42 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Dinov2WithRegisters

## Overview

The Dinov2 With Registers model was proposed in [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) by Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski.

The paper shows that adding extra tokens (registers) to the input sequence of a Vision Transformer, which the model can use for internal computations, improves performance and yields cleaner feature and attention maps.
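
Conceptually (an illustration only, not the library's implementation; the shapes and names here are made up), registers are extra learnable tokens concatenated to the input sequence and simply discarded at the output:

```python
import torch
import torch.nn as nn

batch_size, num_patches, hidden_size, num_register_tokens = 2, 256, 768, 4

patch_embeddings = torch.randn(batch_size, num_patches, hidden_size)
cls_token = nn.Parameter(torch.zeros(1, 1, hidden_size))
register_tokens = nn.Parameter(torch.zeros(1, num_register_tokens, hidden_size))

# the transformer encoder sees [CLS, register tokens, patch tokens]
hidden_states = torch.cat(
    [
        cls_token.expand(batch_size, -1, -1),
        register_tokens.expand(batch_size, -1, -1),
        patch_embeddings,
    ],
    dim=1,
)

# ... encoder layers process the full sequence ...

# the register tokens are dropped before downstream use of the patch features
patch_features = hidden_states[:, 1 + num_register_tokens :, :]
print(patch_features.shape)  # torch.Size([2, 256, 768])
```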

The abstract from the paper is the following:

*Transformers have recently emerged as a powerful tool for learning visual representations. In this paper, we identify and characterize artifacts in feature maps of both supervised and self-supervised ViT networks. The artifacts correspond to high-norm tokens appearing during inference primarily in low-informative background areas of images, that are repurposed for internal computations. We propose a simple yet effective solution based on providing additional tokens to the input sequence of the Vision Transformer to fill that role. We show that this solution fixes that problem entirely for both supervised and self-supervised models, sets a new state of the art for self-supervised visual models on dense visual prediction tasks, enables object discovery methods with larger models, and most importantly leads to smoother feature maps and attention maps for downstream visual processing.*

Tips:

- Usage of DINOv2 with registers is identical to DINOv2 without registers; you'll just get better performance.

This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/facebookresearch/dinov2).
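
A minimal inference sketch (the checkpoint name below is an assumption; the commit history mentions converting all checkpoints, so check the Hub for the actual names):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# checkpoint name assumed for illustration
processor = AutoImageProcessor.from_pretrained("facebook/dinov2-with-registers-base")
model = AutoModel.from_pretrained("facebook/dinov2-with-registers-base")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# the sequence contains the CLS token, the register tokens, and the patch tokens
print(outputs.last_hidden_state.shape)
```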


## Dinov2WithRegistersConfig

[[autodoc]] Dinov2WithRegistersConfig

## Dinov2WithRegistersModel

[[autodoc]] Dinov2WithRegistersModel
- forward

## Dinov2WithRegistersForImageClassification

[[autodoc]] Dinov2WithRegistersForImageClassification
- forward
1 change: 1 addition & 0 deletions docs/source/en/perf_infer_gpu_one.md
@@ -238,6 +238,7 @@ For now, Transformers supports SDPA inference and training for the following architectures:
* [Dbrx](https://huggingface.co/docs/transformers/model_doc/dbrx#transformers.DbrxModel)
* [DeiT](https://huggingface.co/docs/transformers/model_doc/deit#transformers.DeiTModel)
* [Dinov2](https://huggingface.co/docs/transformers/en/model_doc/dinov2)
* [Dinov2_with_registers](https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers)
* [DistilBert](https://huggingface.co/docs/transformers/model_doc/distilbert#transformers.DistilBertModel)
* [Dpr](https://huggingface.co/docs/transformers/model_doc/dpr#transformers.DprReader)
* [EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder_decoder#transformers.EncoderDecoderModel)
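
Listing the model here means SDPA can be requested explicitly at load time. A short sketch (the checkpoint name is an assumption):

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "facebook/dinov2-with-registers-base",  # assumed checkpoint name
    torch_dtype=torch.float16,
    attn_implementation="sdpa",
)
```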
16 changes: 16 additions & 0 deletions src/transformers/__init__.py
@@ -403,6 +403,7 @@
"models.dialogpt": [],
"models.dinat": ["DinatConfig"],
"models.dinov2": ["Dinov2Config"],
"models.dinov2_with_registers": ["Dinov2WithRegistersConfig"],
"models.distilbert": [
"DistilBertConfig",
"DistilBertTokenizer",
@@ -2157,6 +2158,14 @@
"Dinov2PreTrainedModel",
]
)
_import_structure["models.dinov2_with_registers"].extend(
[
"Dinov2WithRegistersBackbone",
"Dinov2WithRegistersForImageClassification",
"Dinov2WithRegistersModel",
"Dinov2WithRegistersPreTrainedModel",
]
)
_import_structure["models.distilbert"].extend(
[
"DistilBertForMaskedLM",
@@ -5352,6 +5361,7 @@
from .models.detr import DetrConfig
from .models.dinat import DinatConfig
from .models.dinov2 import Dinov2Config
from .models.dinov2_with_registers import Dinov2WithRegistersConfig
from .models.distilbert import (
DistilBertConfig,
DistilBertTokenizer,
@@ -7007,6 +7017,12 @@
Dinov2Model,
Dinov2PreTrainedModel,
)
from .models.dinov2_with_registers import (
Dinov2WithRegistersBackbone,
Dinov2WithRegistersForImageClassification,
Dinov2WithRegistersModel,
Dinov2WithRegistersPreTrainedModel,
)
from .models.distilbert import (
DistilBertForMaskedLM,
DistilBertForMultipleChoice,
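
Once these entries are registered, the new classes become importable from the package root; for example:

```python
# the modeling classes require torch; the config does not
from transformers import (
    Dinov2WithRegistersBackbone,
    Dinov2WithRegistersConfig,
    Dinov2WithRegistersForImageClassification,
    Dinov2WithRegistersModel,
)
```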
1 change: 1 addition & 0 deletions src/transformers/models/__init__.py
@@ -77,6 +77,7 @@
dialogpt,
dinat,
dinov2,
dinov2_with_registers,
distilbert,
dit,
donut,
2 changes: 2 additions & 0 deletions src/transformers/models/auto/configuration_auto.py
@@ -94,6 +94,7 @@
("detr", "DetrConfig"),
("dinat", "DinatConfig"),
("dinov2", "Dinov2Config"),
("dinov2_with_registers", "Dinov2WithRegistersConfig"),
("distilbert", "DistilBertConfig"),
("donut-swin", "DonutSwinConfig"),
("dpr", "DPRConfig"),
@@ -404,6 +405,7 @@
("dialogpt", "DialoGPT"),
("dinat", "DiNAT"),
("dinov2", "DINOv2"),
("dinov2_with_registers", "Dinov2WithRegisters"),
("distilbert", "DistilBERT"),
("dit", "DiT"),
("donut-swin", "DonutSwin"),
4 changes: 4 additions & 0 deletions src/transformers/models/auto/modeling_auto.py
@@ -92,6 +92,7 @@
("detr", "DetrModel"),
("dinat", "DinatModel"),
("dinov2", "Dinov2Model"),
("dinov2_with_registers", "Dinov2WithRegistersModel"),
("distilbert", "DistilBertModel"),
("donut-swin", "DonutSwinModel"),
("dpr", "DPRQuestionEncoder"),
@@ -584,6 +585,7 @@
("detr", "DetrModel"),
("dinat", "DinatModel"),
("dinov2", "Dinov2Model"),
("dinov2_with_registers", "Dinov2WithRegistersModel"),
("dpt", "DPTModel"),
("efficientformer", "EfficientFormerModel"),
("efficientnet", "EfficientNetModel"),
@@ -659,6 +661,7 @@
),
("dinat", "DinatForImageClassification"),
("dinov2", "Dinov2ForImageClassification"),
("dinov2_with_registers", "Dinov2WithRegistersForImageClassification"),
(
"efficientformer",
(
@@ -1373,6 +1376,7 @@
("convnextv2", "ConvNextV2Backbone"),
("dinat", "DinatBackbone"),
("dinov2", "Dinov2Backbone"),
("dinov2_with_registers", "Dinov2WithRegistersBackbone"),
("focalnet", "FocalNetBackbone"),
("hiera", "HieraBackbone"),
("maskformer-swin", "MaskFormerSwinBackbone"),
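
With these mappings in place, the auto classes can resolve the `dinov2_with_registers` model type from a config alone; a small sketch of the intended effect (assuming the registrations above):

```python
from transformers import AutoBackbone, AutoConfig, AutoModel

config = AutoConfig.for_model("dinov2_with_registers")  # -> Dinov2WithRegistersConfig
model = AutoModel.from_config(config)  # -> Dinov2WithRegistersModel
backbone = AutoBackbone.from_config(config)  # -> Dinov2WithRegistersBackbone
```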
42 changes: 19 additions & 23 deletions src/transformers/models/dinov2/modeling_dinov2.py
@@ -347,37 +347,33 @@ def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
         return hidden_state * self.lambda1
 
 
-# Copied from transformers.models.beit.modeling_beit.drop_path
-def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
-    """
-    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
-
-    Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
-    however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
-    See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the
-    layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the
-    argument.
-    """
-    if drop_prob == 0.0 or not training:
-        return input
-    keep_prob = 1 - drop_prob
-    shape = (input.shape[0],) + (1,) * (input.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
-    random_tensor = keep_prob + torch.rand(shape, dtype=input.dtype, device=input.device)
-    random_tensor.floor_()  # binarize
-    output = input.div(keep_prob) * random_tensor
-    return output
-
-
 # Copied from transformers.models.beit.modeling_beit.BeitDropPath
 class Dinov2DropPath(nn.Module):
     """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks)."""
 
     def __init__(self, drop_prob: Optional[float] = None) -> None:
         super().__init__()
         self.drop_prob = drop_prob
 
+    def drop_path(self, input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
+        """
+        Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
+        Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
+        however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
+        See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the
+        layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the
+        argument.
+        """
+        if drop_prob == 0.0 or not training:
+            return input
+        keep_prob = 1 - drop_prob
+        shape = (input.shape[0],) + (1,) * (input.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
+        random_tensor = keep_prob + torch.rand(shape, dtype=input.dtype, device=input.device)
+        random_tensor.floor_()  # binarize
+        output = input.div(keep_prob) * random_tensor
+        return output
+
     def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
-        return drop_path(hidden_states, self.drop_prob, self.training)
+        return self.drop_path(hidden_states, self.drop_prob, self.training)
 
     def extra_repr(self) -> str:
         return "p={}".format(self.drop_prob)
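
The drop path (stochastic depth) logic above zeroes a sample's residual branch with probability `drop_prob` and rescales survivors by `1 / keep_prob`, so the expected activation is unchanged. A quick standalone check of that property (illustration only, not part of the diff):

```python
import torch

x = torch.ones(100_000, 8)
drop_prob = 0.2
keep_prob = 1 - drop_prob

# same mask construction as drop_path: 1 with probability keep_prob, else 0
random_tensor = (keep_prob + torch.rand(x.shape[0], 1)).floor_()
output = x.div(keep_prob) * random_tensor

print(output.mean().item())  # close to 1.0: the expectation is preserved
```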
57 changes: 57 additions & 0 deletions src/transformers/models/dinov2_with_registers/__init__.py
@@ -0,0 +1,57 @@
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ...utils import (
OptionalDependencyNotAvailable,
_LazyModule,
is_torch_available,
)


_import_structure = {"configuration_dinov2_with_registers": ["Dinov2WithRegistersConfig"]}

try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_dinov2_with_registers"] = [
"Dinov2WithRegistersForImageClassification",
"Dinov2WithRegistersModel",
"Dinov2WithRegistersPreTrainedModel",
"Dinov2WithRegistersBackbone",
]

if TYPE_CHECKING:
from .configuration_dinov2_with_registers import Dinov2WithRegistersConfig

try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_dinov2_with_registers import (
Dinov2WithRegistersBackbone,
Dinov2WithRegistersForImageClassification,
Dinov2WithRegistersModel,
Dinov2WithRegistersPreTrainedModel,
)

else:
import sys

sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
Loading