| Source | ID | Title |
|---|---|---|
| Github GHSA | GHSA-hpv8-x276-m59f | vLLM Vulnerable to Remote DoS via Special-Token Placeholders |
**Solution:** No solution given by the vendor.

**Workaround:** No workaround given by the vendor.
Wed, 13 May 2026 13:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | | ssvc |
Tue, 12 May 2026 23:00:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| First Time appeared | | Vllm-project, Vllm-project vllm |
| Vendors & Products | | Vllm-project, Vllm-project vllm |
Tue, 12 May 2026 20:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Description | | vLLM is an inference and serving engine for large language models (LLMs). From 0.6.1 to before 0.20.0, there is a Token Injection vulnerability in vLLM’s multimodal processing. Unauthenticated, text-only prompts that spell out special tokens are interpreted as control tokens. Image and video placeholder sequences supplied without matching data cause vLLM to index into empty grids during input-position computation, raising an unhandled IndexError and terminating the worker or degrading availability. Multimodal paths that rely on image_grid_thw/video_grid_thw are affected. This vulnerability is fixed in 0.20.0. |
| Title | | vLLM: Remote DoS via Special-Token Placeholders |
| Weaknesses | | CWE-129 |
| References | | |
| Metrics | | cvssV3_1 |
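The failure mode in the description can be sketched as follows. This is a minimal illustration, not vLLM's actual code: the placeholder string, function names, and grid representation are all hypothetical, assuming only the pattern the advisory describes (placeholder tokens counted in the prompt while the accompanying `image_grid_thw` metadata is empty).

```python
# Hypothetical sketch of the advisory's failure mode: a text-only prompt
# that spells the image placeholder token is treated as a real image slot,
# but no grid metadata (image_grid_thw) was supplied alongside it.

IMAGE_PLACEHOLDER = "<|image_pad|>"  # hypothetical special-token string


def compute_positions(prompt_tokens, image_grid_thw):
    """Assign a (t, h, w) grid to each image placeholder in the prompt."""
    positions = []
    image_idx = 0
    for i, tok in enumerate(prompt_tokens):
        if tok == IMAGE_PLACEHOLDER:
            # Vulnerable pattern: indexes the grid list without checking
            # that an image was actually supplied, so a text-only prompt
            # spelling the placeholder raises an unhandled IndexError.
            t, h, w = image_grid_thw[image_idx]
            positions.append((i, t, h, w))
            image_idx += 1
    return positions


def compute_positions_safe(prompt_tokens, image_grid_thw):
    """Hardened variant: reject prompts whose placeholder count does not
    match the number of supplied image grids."""
    n_placeholders = sum(tok == IMAGE_PLACEHOLDER for tok in prompt_tokens)
    if n_placeholders != len(image_grid_thw):
        raise ValueError(
            f"prompt contains {n_placeholders} image placeholder(s) "
            f"but {len(image_grid_thw)} image grid(s) were supplied"
        )
    return compute_positions(prompt_tokens, image_grid_thw)
```

With a text-only prompt such as `["Hello", "<|image_pad|>", "world"]` and an empty grid list, `compute_positions` raises `IndexError` (the worker-terminating path), while `compute_positions_safe` rejects the request with a handleable `ValueError` before any indexing occurs.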
Status: PUBLISHED
Assigner: GitHub_M
Published:
Updated: 2026-05-13T12:24:53.560Z
Reserved: 2026-05-05T15:42:40.518Z
Link: CVE-2026-44222
Updated: 2026-05-13T12:24:49.160Z
Status: Awaiting Analysis
Published: 2026-05-12T20:16:43.160
Modified: 2026-05-13T18:16:08.537
Link: CVE-2026-44222
OpenCVE Enrichment
Updated: 2026-05-12T22:45:15Z