| ID | CVE-2026-27893 |
| Summary | vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue. |
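The bug class described in the summary can be sketched in a few lines. This is an illustrative mock, not vLLM's actual code: the function names and the dict-based "loader" are invented stand-ins for a call such as a Hugging Face `from_pretrained` that accepts a `trust_remote_code` flag.

```python
# Sketch of the vulnerability class (illustrative only, not vLLM's code).
# A model implementation hardcodes trust_remote_code=True for a sub-component,
# silently discarding the setting the user passed at the top level.

def load_subcomponent(name: str, trust_remote_code: bool) -> dict:
    """Stand-in for a loader that may execute repository-supplied code."""
    return {"name": name, "remote_code_allowed": trust_remote_code}

def load_model_buggy(name: str, trust_remote_code: bool) -> dict:
    # BUG: the user's setting is dropped; remote code is always trusted.
    return load_subcomponent(name, trust_remote_code=True)

def load_model_fixed(name: str, trust_remote_code: bool) -> dict:
    # FIX: propagate the user's explicit opt-in/opt-out to the sub-component.
    return load_subcomponent(name, trust_remote_code=trust_remote_code)

if __name__ == "__main__":
    # User explicitly disables remote code (--trust-remote-code=False)...
    print(load_model_buggy("some/model", trust_remote_code=False))  # bypassed: True
    print(load_model_fixed("some/model", trust_remote_code=False))  # respected: False
```

The fix in 0.18.0 amounts to the second variant: threading the user-supplied flag through to every sub-component load instead of hardcoding it.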
| Reference | |
| CVSS | |
| Base: | 8.8 |
| Impact: | 5.9 |
| Exploitability: | 2.8 |
| Access | | |
| Vector | Complexity | Authentication |
| NETWORK | LOW | NONE |
| Impact | | |
| Confidentiality | Integrity | Availability |
| HIGH | HIGH | HIGH |
| CVSS vector | CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H |
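The Base, Impact, and Exploitability figures in the tables above follow from the CVSS v3.1 base-score formulas. A minimal sketch that recomputes them from the vector string; the metric weights are the published CVSS v3.1 values, and for brevity it only handles the scope-unchanged (S:U) case that applies here:

```python
# Minimal CVSS v3.1 base-score calculator, scope unchanged (S:U) only.
# Weights are taken from the CVSS v3.1 specification tables.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # scope-unchanged weights
    "UI": {"N": 0.85, "R": 0.62},
    "C":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "I":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "A":  {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """Smallest number with one decimal place >= x (the spec's Roundup)."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_scores(vector: str) -> tuple:
    """Return (base, impact, exploitability) for an S:U CVSS:3.1 vector."""
    m = dict(part.split(":") for part in vector.split("/")[1:])  # drop "CVSS:3.1"
    iss = 1 - (1 - WEIGHTS["C"][m["C"]]) \
            * (1 - WEIGHTS["I"][m["I"]]) \
            * (1 - WEIGHTS["A"][m["A"]])
    impact = 6.42 * iss  # scope unchanged
    expl = 8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]] \
                * WEIGHTS["PR"][m["PR"]] * WEIGHTS["UI"][m["UI"]]
    base = 0.0 if impact <= 0 else roundup(min(impact + expl, 10))
    return base, round(impact, 1), round(expl, 1)

print(base_scores("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H"))
# → (8.8, 5.9, 2.8)
```

Recomputing from the vector reproduces the advisory's figures exactly: Base 8.8, Impact 5.9, Exploitability 2.8.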
| Last major update | 27-03-2026 - 00:16 |
| Published | 27-03-2026 - 00:16 |