| ID | CVE-2025-66448 |
| Summary |
vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vLLM had a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python code from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend silently runs the backend's code on the victim host. This vulnerability is fixed in 0.11.1. |
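The flaw described above can be sketched as follows. This is a simplified, self-contained illustration of the pattern, not vLLM's actual source: the class and function names are hypothetical, and a stub stands in for transformers' get_class_from_dynamic_module (which downloads and executes Python from the referenced repository).

```python
# Hypothetical sketch of the vulnerability pattern; names are illustrative,
# not taken from vLLM's codebase.

def fetch_and_exec_remote_class(repo_ref):
    # Stand-in for get_class_from_dynamic_module(): in the real library this
    # downloads and executes Python from the repo named in the auto_map entry.
    raise RuntimeError(f"remote code executed from {repo_ref!r}")

class VulnerableConfig:
    """Resolves auto_map unconditionally -- trust_remote_code is never checked."""
    def __init__(self, config_dict, trust_remote_code=False):
        auto_map = config_dict.get("auto_map")
        if auto_map:
            # BUG: remote code runs even when trust_remote_code=False.
            fetch_and_exec_remote_class(auto_map["AutoConfig"])

class FixedConfig:
    """Gates dynamic-module resolution on the caller's trust_remote_code flag."""
    def __init__(self, config_dict, trust_remote_code=False):
        auto_map = config_dict.get("auto_map")
        if auto_map:
            if not trust_remote_code:
                raise ValueError(
                    "config.json requests remote code via auto_map; "
                    "refusing because trust_remote_code=False"
                )
            fetch_and_exec_remote_class(auto_map["AutoConfig"])

# A benign-looking frontend config whose auto_map points at a separate
# (hypothetical) malicious backend repo, as in the attack scenario above.
cfg = {"auto_map": {"AutoConfig": "attacker/malicious-backend--Config"}}
```

With the vulnerable class, merely constructing the config from this dict triggers the remote fetch despite trust_remote_code=False; the fixed class refuses and forces the caller to opt in explicitly.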
| Reference | |
| CVSS |
| Base: | 7.1 |
| Impact: | 5.9 |
| Exploitability: | 1.2 |
|
| Access |
| Vector | Complexity | Authentication |
| NETWORK | HIGH | LOW |
|
| Impact |
| Confidentiality | Integrity | Availability |
| HIGH | HIGH | HIGH |
|
| CVSS vector | CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H |
| Last major update | 02-12-2025 - 17:16 |
| Published | 01-12-2025 - 23:15 |