Actions: vllm-project/vllm

codespell

3,299 workflow runs

Tpu profiles
codespell #3255: Pull request #11041 synchronize by robertgshaw2-neuralmagic
December 10, 2024 23:37 22s neuralmagic:tpu-profiles
[V1] Use input_ids as input for text-only models
codespell #3252: Pull request #11032 synchronize by WoosukKwon
December 10, 2024 22:56 20s v1-embedding-cg
[V1][Bugfix] Always set enable_chunked_prefill = True for V1 (#11061)
codespell #3251: Commit 134810b pushed by WoosukKwon
December 10, 2024 22:41 26s main
[Core] V1: Use multiprocessing by default
codespell #3250: Pull request #11074 synchronize by russellb
December 10, 2024 22:27 26s russellb:v1-multiproc-by-default
[Core] V1: Use multiprocessing by default
codespell #3249: Pull request #11074 synchronize by russellb
December 10, 2024 22:16 23s russellb:v1-multiproc-by-default
[Misc] add w8a8 asym models
codespell #3248: Pull request #11075 opened by dsikka
December 10, 2024 22:13 27s neuralmagic:update_w8a8_tests
[Bugfix]: Clamp -inf logprob values in prompt_logprobs
codespell #3245: Pull request #11073 synchronize by rafvasq
December 10, 2024 21:47 22s rafvasq:investigate-top-k
[Bugfix]: Clamp -inf logprob values in prompt_logprobs
codespell #3244: Pull request #11073 synchronize by rafvasq
December 10, 2024 21:46 25s rafvasq:investigate-top-k
[Bugfix] Fix Mamba multistep
codespell #3242: Pull request #11071 opened by tlrmchlsmth
December 10, 2024 21:18 22s neuralmagic:fix_mamba_multistep
[torch.compile] add a flag to track batchsize statistics (#11059)
codespell #3241: Commit 75f89dc pushed by youkaichao
December 10, 2024 20:40 21s main
[Core] Update to outlines >= 0.1.8 (#10576)
codespell #3238: Commit e739194 pushed by youkaichao
December 10, 2024 20:08 21s main
[V1] VLM preprocessor hashing
codespell #3237: Pull request #11020 synchronize by alexm-neuralmagic
December 10, 2024 18:38 22s v1_vlm_hash_mapper
[V1] VLM preprocessor hashing
codespell #3236: Pull request #11020 synchronize by alexm-neuralmagic
December 10, 2024 18:27 25s v1_vlm_hash_mapper
[V1][Bugfix] Always set enable_chunked_prefill = True for V1
codespell #3235: Pull request #11061 synchronize by WoosukKwon
December 10, 2024 18:14 1m 39s v1-chunked-prefill-flag
[Model] PP support for Mamba-like models
codespell #3233: Pull request #10992 synchronize by mzusman
December 10, 2024 18:04 22s mzusman:mamba_jamba_pp
[Core] Support offloading KV cache to CPU
codespell #3232: Pull request #10874 synchronize by ApostaC
December 10, 2024 18:04 21s KuntaiDu:yihua-cpu-offloading2