
Conversation

dependabot[bot] (Contributor) commented on behalf of GitHub on Oct 7, 2025

Bumps the pip group with 1 update, [vllm](https://github.com/vllm-project/vllm), in each of the following directories (the resulting pin change is sketched just after this list):

  • /deepseek-prover-v2-671b
  • /deepseek-r1-0528-qwen3-8b
  • /deepseek-r1-671b
  • /deepseek-r1-llama3.3-70b
  • /deepseek-v2-lite-pd
  • /deepseek-v3-671b
  • /deepseek-v3.1-671b
  • /devstral-small-2505
  • /gemma3-4b-instruct
  • /gpt-oss-120b
  • /gpt-oss-20b
  • /jamba1.6-large
  • /jamba1.6-mini
  • /llama3.1-8b-instruct
  • /llama3.1-8b-kv-offloading
  • /llama3.2-3b-instruct
  • /llama3.3-70b-instruct
  • /llama4-17b-maverick-instruct
  • /llama4-17b-scout-instruct
  • /magistral-small-2506
  • /mistral-small-3.1-24b-instruct-2503
  • /phi4-14b-reasoning
  • /phi4-14b-reasoning-plus

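For each directory, the change itself is a one-line pin update in that directory's requirements file, from `vllm==0.10.2` (or `vllm==0.10.1.1` in one case) to `vllm==0.11.0`; the exact file names are not shown in this PR. A quick post-install sanity check might look like this:

```python
# Post-bump sanity check: confirm the environment actually resolved vllm 0.11.0.
# Minimal sketch; assumes the updated requirements for a given directory have
# already been installed into the current environment.
from importlib.metadata import version

installed = version("vllm")
assert installed == "0.11.0", f"unexpected vllm version: {installed}"
print("vllm", installed)
```
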
Updates vllm from 0.10.2 to 0.11.0

Release notes

Sourced from vllm's releases.

v0.11.0

Highlights

This release features 538 commits from 207 contributors (65 of them new)!

  • This release completes the removal of the V0 engine. All V0 engine code, including AsyncLLMEngine, LLMEngine, MQLLMEngine, all V0 attention backends, and related components, has been removed; V1 is now the only engine in the codebase.
  • This release turns on FULL_AND_PIECEWISE as the default CUDA graph mode. This should give better out-of-the-box performance for most models, particularly fine-grained MoEs, while preserving compatibility with models that only support PIECEWISE mode (see the sketch after these highlights).

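Regarding the two highlights above: with V0 removed, the offline `LLM` entry point (now backed solely by the V1 engine) remains the supported Python API, and CUDA graphs default to FULL_AND_PIECEWISE. Below is a minimal sketch, assuming the `compilation_config={"cudagraph_mode": ...}` knob keeps its current name, of pinning the older PIECEWISE behavior if a model misbehaves under the new default:

```python
# Minimal offline-inference sketch against vLLM 0.11.0 (V1 engine only).
# The compilation_config override below is an assumption about the current
# config key name; omit it to accept the new FULL_AND_PIECEWISE default.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",            # any supported model
    compilation_config={"cudagraph_mode": "PIECEWISE"},  # assumed knob: keep piecewise-only capture
)
params = SamplingParams(temperature=0.0, max_tokens=64)
out = llm.generate(["Summarize the v0.11.0 highlights in one sentence."], params)
print(out[0].outputs[0].text)
```
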
Model Support

  • New architectures: DeepSeek-V3.2-Exp (#25896), Qwen3-VL series (#24727), Qwen3-Next (#24526), OLMo3 (#24534), LongCat-Flash (#23991), Dots OCR (#24645), Ling2.0 (#24627), CWM (#25611).
  • Encoders: RADIO encoder support (#24595), Transformers backend support for encoder-only models (#25174).
  • Task expansion: BERT token classification/NER (#24872), multimodal models for pooling tasks (#24451).
  • Data parallel for vision encoders: InternVL (#23909), Qwen2-VL (#25445), Qwen3-VL (#24955).
  • Speculative decoding: EAGLE3 for MiniCPM3 (#24243) and GPT-OSS (#25246).
  • Features: Qwen3-VL text-only mode (#26000), EVS video token pruning (#22980), Mamba2 TP+quantization (#24593), MRoPE + YaRN (#25384), Whisper on XPU (#25123), LongCat-Flash-Chat tool calling (#24083).
  • Performance: GLM-4.1V 916ms TTFT reduction via fused RMSNorm (#24733), GLM-4 MoE SharedFusedMoE optimization (#24849), Qwen2.5-VL CUDA sync removal (#24741), Qwen3-VL Triton MRoPE kernel (#25055), FP8 checkpoints for Qwen3-Next (#25079).
  • Reasoning: SeedOSS reason parser (#24263).

Engine Core

  • KV cache offloading: CPU offloading with LRU management (#19848, #20075, #21448, #22595, #24251).
  • V1 features: Prompt embeddings (#24278), sharded state loading (#25308), FlexAttention sliding window (#24089), LLM.apply_model (#18465).
  • Hybrid allocator: Pipeline parallel (#23974), varying hidden sizes (#25101).
  • Async scheduling: Uniprocessor executor support (#24219).
  • Architecture: Tokenizer group removal (#24078), shared memory multimodal caching (#20452).
  • Attention: Hybrid SSM/Attention in Triton (#21197), FlashAttention 3 for ViT (#24347).
  • Performance: FlashInfer RoPE 2x speedup (#21126), fused Q/K RoPE 11% improvement (#24511, #25005), 8x spec decode overhead reduction (#24986), FlashInfer spec decode with 1.14x speedup (#25196), model info caching (#23558), inputs_embeds copy avoidance (#25739).
  • LoRA: Optimized weight loading (#25403).
  • Defaults: CUDA graph mode FULL_AND_PIECEWISE (#25444), Inductor standalone compile disabled (#25391).
  • torch.compile: CUDA graph Inductor partition integration (#24281).

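One of the V1 features listed above, LLM.apply_model (#18465), runs a user callback against the loaded model. A minimal sketch of how it could be used, assuming the call returns one result per worker:

```python
# Sketch: use LLM.apply_model to run a read-only callback on the loaded model,
# here counting parameters. One result per worker is an assumption.
from vllm import LLM

def count_params(model) -> int:
    # `model` is the torch.nn.Module held by the worker.
    return sum(p.numel() for p in model.parameters())

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # small model, purely for illustration
print(llm.apply_model(count_params))
```
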
Hardware & Performance

  • NVIDIA: FP8 FlashInfer MLA decode (#24705), BF16 fused MoE for Hopper/Blackwell expert parallel (#25503).
  • DeepGEMM: Enabled by default (#24462), 5.5% throughput improvement (#24783).
  • New architectures: RISC-V 64-bit (#22112), ARM non-x86 CPU (#25166), ARM 4-bit fused MoE (#23809).
  • AMD: ROCm 7.0 (#25178), GLM-4.5 MI300X tuning (#25703).
  • Intel XPU: MoE DP accuracy fix (#25465).

Large Scale Serving & Performance

  • Dual-Batch Overlap (DBO): Overlapping computation mechanism (#23693), DeepEP high throughput + prefill (#24845).
  • Data Parallelism: torchrun launcher (#24899), Ray placement groups (#25026), Triton DP/EP kernels (#24588).
  • EPLB: Hunyuan V1 (#23078), Mixtral (#22842), static placement (#23745), reduced overhead (#24573).
  • Disaggregated serving: KV transfer metrics (#22188), NIXL MLA latent dimension (#25902).
  • MoE: Shared expert overlap optimization (#24254), SiLU kernel for DeepSeek-R1 (#24054), Enable Allgather/ReduceScatter backend for NaiveAllToAll (#23964).
  • Distributed: NCCL symmetric memory with 3-4% throughput improvement (#24532), enabled by default for TP (#25070).

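Several items above (NCCL symmetric memory on by default for TP, DBO, EPLB) come into play once a model is sharded across GPUs; the entry point itself is unchanged, so a tensor-parallel load like the hedged sketch below picks up the new defaults without extra flags:

```python
# Sketch: tensor-parallel offline load; the 0.11.0 defaults discussed above
# (e.g. NCCL symmetric memory for TP) are expected to apply automatically.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",  # matches one of this repo's directories
    tensor_parallel_size=4,                     # shard across 4 GPUs; adjust to your hardware
)
print(llm.generate(["Hello"], SamplingParams(max_tokens=8))[0].outputs[0].text)
```
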
Quantization

  • FP8: Per-token-group quantization (#24342), hardware-accelerated instructions (#24757), torch.compile KV cache (#22758), paged attention update (#22222).
  • FP4: NVFP4 for dense models (#25609), Gemma3 (#22771), Llama 3.1 405B (#25135).
  • W4A8: Faster preprocessing (#23972).
  • Compressed tensors: Blocked FP8 for MoE (#25219).

... (truncated)

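The FP8 items in the quoted Quantization list ride on vLLM's existing quantization knobs; requesting on-the-fly FP8 for an unquantized checkpoint is a small configuration change. A sketch, with both argument names to be verified against the 0.11.0 docs:

```python
# Sketch: dynamic FP8 weight quantization plus an FP8 KV cache.
# Argument names follow vLLM's existing options; treat them as assumptions here.
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # small model, purely for illustration
    quantization="fp8",                  # quantize weights to FP8 at load time
    kv_cache_dtype="fp8",                # store the KV cache in FP8 as well
)
```
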
Commits
  • f71952c [Build/CI] Revert back to Ubuntu 20.04, install python 3.12 with uv (#26103)
  • d100776 [Bugfix] Disable cascade attention with FlashInfer (#26130)
  • c75c2e7 [Deepseek v3.2] Support indexer prefill chunking (#25999)
  • 9d9a2b7 [Small] Prevent bypassing media domain restriction via HTTP redirects (#26035)
  • 6040e0b [BugFix] Fix FI accuracy issue when used for MLA prefill (#26063)
  • 05bf0c5 Update base image to 22.04 (jammy) (#26065)
  • c536881 [BugFix] ChunkedLocalAttention is currently not CG compatible (#26034)
  • ebce361 [BugFix][DP/EP] Fix CUTLASS MLA hang under load (#26026)
  • e4beabd [BugFix] Fix default kv-cache-dtype default for DeepseekV3.2 (#25988)
  • febb688 [Bugfix] Fix __syncwarp on ROCM (#25996)
  • Additional commits viewable in compare view

Updates vllm from 0.10.1.1 to 0.11.0

Release notes: the same v0.11.0 release notes as quoted above.

Updates `vllm` from 0.10.2 to 0.11.0 (22 directories) and from 0.10.1.1 to 0.11.0 (1 directory)
- [Release notes](https://github.com/vllm-project/vllm/releases)
- [Changelog](https://github.com/vllm-project/vllm/blob/main/RELEASE.md)
- [Commits](vllm-project/vllm@v0.10.2...v0.11.0)

---
updated-dependencies:
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <[email protected]>
dependabot[bot] added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels on Oct 7, 2025