Update resources
NielsRogge committed Aug 15, 2024
1 parent 8820fe8 commit a17ab37
Showing 5 changed files with 22 additions and 9 deletions.
2 changes: 1 addition & 1 deletion docs/source/en/model_doc/depth_anything.md
@@ -104,7 +104,7 @@ If you want to do the pre- and postprocessing yourself, here's how to do that:

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Depth Anything.

-- [Monocular depth estimation task guide](../tasks/depth_estimation)
+- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)
- A notebook showcasing inference with [`DepthAnythingForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Depth%20Anything/Predicting_depth_in_an_image_with_Depth_Anything.ipynb). 🌎

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
19 changes: 16 additions & 3 deletions docs/source/en/model_doc/depth_anything_v2.md
@@ -24,6 +24,12 @@ The abstract from the paper is the following:

*This work presents Depth Anything V2. Without pursuing fancy techniques, we aim to reveal crucial findings to pave the way towards building a powerful monocular depth estimation model. Notably, compared with V1, this version produces much finer and more robust depth predictions through three key practices: 1) replacing all labeled real images with synthetic images, 2) scaling up the capacity of our teacher model, and 3) teaching student models via the bridge of large-scale pseudo-labeled real images. Compared with the latest models built on Stable Diffusion, our models are significantly more efficient (more than 10x faster) and more accurate. We offer models of different scales (ranging from 25M to 1.3B params) to support extensive scenarios. Benefiting from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models. In addition to our models, considering the limited diversity and frequent noise in current test sets, we construct a versatile evaluation benchmark with precise annotations and diverse scenes to facilitate future research.*

+<Tip>
+
+- Both relative and absolute depth estimation checkpoints can be found on the hub. The relative models are [here](https://huggingface.co/models?library=transformers&other=relative+depth&sort=trending) and the absolute models are [here](https://huggingface.co/models?library=transformers&other=absolute+depth&sort=trending).
+
+</Tip>

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg"
alt="drawing" width="600"/>

@@ -46,7 +52,10 @@ The pipeline allows to use the model in a few lines of code:
>>> import requests

>>> # load pipe
+>>> # use this for relative depth
>>> pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")
+>>> # use this for absolute depth
+>>> # pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Metric-Outdoor-Small-hf")

>>> # load image
>>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
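For reference, a self-contained version of the pipeline snippet this hunk modifies might look like the sketch below. The checkpoint names are the ones shown in the diff; the final `pipe(image)` call and the handling of its `"depth"` output are assumptions based on the standard `depth-estimation` pipeline interface, since those lines are not visible in this hunk.

```python
from transformers import pipeline
from PIL import Image
import requests

# use this for relative depth
pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")
# use this for absolute (metric) depth
# pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Metric-Outdoor-Small-hf")

# load image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# inference; the pipeline returns a dict with a PIL depth map under "depth"
depth = pipe(image)["depth"]
depth.save("depth.png")
```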
@@ -70,8 +79,12 @@ If you want to do the pre- and post-processing yourself, here's how to do that:
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

->>> image_processor = AutoImageProcessor.from_pretrained("depth-anything/Depth-Anything-V2-Small-hf")
->>> model = AutoModelForDepthEstimation.from_pretrained("depth-anything/Depth-Anything-V2-Small-hf")
+>>> # use this for relative depth
+>>> model_id = "depth-anything/Depth-Anything-V2-Small-hf"
+>>> # use this for absolute depth
+>>> # model_id = "depth-anything/Depth-Anything-V2-Metric-Outdoor-Small-hf"
+>>> image_processor = AutoImageProcessor.from_pretrained(model_id)
+>>> model = AutoModelForDepthEstimation.from_pretrained(model_id)

>>> # prepare image for the model
>>> inputs = image_processor(images=image, return_tensors="pt")
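The hunk ends at the preprocessing step. A minimal sketch of the forward pass and post-processing that typically follows is shown below; it continues from `model`, `inputs`, and `image` above, and the interpolation back to the input resolution is an assumption based on the usual depth-estimation recipe rather than lines visible in this diff.

```python
import numpy as np
import torch
from PIL import Image

# forward pass
with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth

# interpolate the prediction to the original image size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],  # PIL size is (width, height)
    mode="bicubic",
    align_corners=False,
)

# visualize as an 8-bit grayscale image
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
```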
@@ -98,7 +111,7 @@

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Depth Anything.

-- [Monocular depth estimation task guide](../tasks/depth_estimation)
+- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)
- [Depth Anything V2 demo](https://huggingface.co/spaces/depth-anything/Depth-Anything-V2).
- A notebook showcasing inference with [`DepthAnythingForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Depth%20Anything/Predicting_depth_in_an_image_with_Depth_Anything.ipynb). 🌎
- [Core ML conversion of the `small` variant for use on Apple Silicon](https://huggingface.co/apple/coreml-depth-anything-v2-small).
5 changes: 2 additions & 3 deletions docs/source/en/model_doc/dpt.md
@@ -51,10 +51,9 @@ model = DPTForDepthEstimation(config=config)

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DPT.

-- Demo notebooks for [`DPTForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DPT).
-
-- [Semantic segmentation task guide](../tasks/semantic_segmentation)
- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)
+- [Semantic segmentation task guide](../tasks/semantic_segmentation)
+- Demo notebooks for [`DPTForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DPT). 🌎

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

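Regarding the `model = DPTForDepthEstimation(config=config)` line in the DPT hunk header above, a minimal sketch of the two ways to obtain a model is shown below; the pretrained checkpoint name `Intel/dpt-large` is used here purely as an illustrative example.

```python
from transformers import DPTConfig, DPTForDepthEstimation

# a randomly initialized model from a default configuration
config = DPTConfig()
model = DPTForDepthEstimation(config=config)

# or load pretrained weights from the Hub
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")
```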
4 changes: 2 additions & 2 deletions docs/source/en/model_doc/llava.md
@@ -106,12 +106,12 @@ Flash Attention 2 is an even faster, optimized version of the previous optimizat

## Resources

-A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BEiT.
+A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaVa.

<PipelineTag pipeline="image-to-text"/>

- A [Google Colab demo](https://colab.research.google.com/drive/1qsl6cd2c8gGtEW1xV5io7S8NHh-Cp1TV?usp=sharing) on how to run Llava on a free-tier Google colab instance leveraging 4-bit inference.
-- A [similar notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LLaVa/Inference_with_LLaVa_for_multimodal_generation.ipynb) showcasing batched inference. 🌎
+- Demo notebooks regarding fine-tuning on custom datasets and batched inference can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LLaVa/). 🌎


## LlavaConfig
1 change: 1 addition & 0 deletions docs/source/en/model_doc/zoedepth.md
@@ -91,6 +91,7 @@ depth = Image.fromarray(formatted)

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ZoeDepth.

+- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)
- A demo notebook regarding inference with ZoeDepth models can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ZoeDepth). 🌎
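For the ZoeDepth resources above, inference works analogously to the Depth Anything snippets earlier in this commit; a minimal sketch, assuming the `Intel/zoedepth-nyu-kitti` checkpoint as a representative ZoeDepth model:

```python
from transformers import pipeline
from PIL import Image
import requests

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

# ZoeDepth predicts metric (absolute) depth
pipe = pipeline(task="depth-estimation", model="Intel/zoedepth-nyu-kitti")
depth = pipe(image)["depth"]
```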

## ZoeDepthConfig
