ID | CVE-2025-47277
Summary |
vLLM, an inference and serving engine for large language models (LLMs), has an issue in versions 0.6.5 through 0.8.4 that ONLY affects environments using the `PyNcclPipe` KV cache transfer integration with the V0 engine; no other configurations are affected. vLLM supports the `PyNcclPipe` class to establish a peer-to-peer communication domain for data transmission between distributed nodes. GPU-side KV cache transfer is implemented through the `PyNcclCommunicator` class, while CPU-side control messages are exchanged via the `send_obj` and `recv_obj` methods. The intent was that this interface be exposed only to a private network, using the IP address specified by the `--kv-ip` CLI parameter; the vLLM documentation states that it must be limited to a secured network. However, the default and intentional behavior of PyTorch is that the `TCPStore` interface listens on ALL interfaces, regardless of the IP address provided; the given address is used only on the client side. vLLM was fixed with a workaround that forces the `TCPStore` instance to bind its socket to the specified private interface. As of version 0.8.5, vLLM limits the `TCPStore` socket to the configured private interface. |
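To illustrate the behaviour described above, the following is a minimal sketch (not vLLM's actual patch) of how PyTorch's `torch.distributed.TCPStore` ends up listening on all interfaces, and of one way to pin the listener to a private interface by pre-binding a socket. The `PRIVATE_IP` and `PORT` values are hypothetical, and the `wait_for_workers` and `master_listen_fd` keyword arguments are assumed to be available in the PyTorch release in use.

    # Minimal sketch of the behaviour described above; not vLLM's actual patch.
    # Assumes torch.distributed.TCPStore accepts the `wait_for_workers` and
    # `master_listen_fd` keyword arguments (present in recent PyTorch releases;
    # some versions may additionally require use_libuv=False).

    import socket
    from datetime import timedelta

    from torch.distributed import TCPStore

    PRIVATE_IP = "10.0.0.5"   # hypothetical --kv-ip value on a private network
    PORT = 14579              # hypothetical --kv-port value

    # Default behaviour: the host argument is only used by clients to reach the
    # store; the server side listens on 0.0.0.0, i.e. on every interface.
    open_store = TCPStore(PRIVATE_IP, PORT, is_master=True,
                          timeout=timedelta(seconds=30), wait_for_workers=False)
    # `ss -ltn | grep 14579` on this host would show 0.0.0.0:14579.

    # Workaround in the spirit of the 0.8.5 fix: pre-bind a listening socket to
    # the private interface and hand its file descriptor to TCPStore.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((PRIVATE_IP, PORT + 1))
    sock.listen()
    pinned_store = TCPStore(PRIVATE_IP, PORT + 1, is_master=True,
                            timeout=timedelta(seconds=30), wait_for_workers=False,
                            master_listen_fd=sock.fileno())
    # The control-message endpoint now only accepts connections arriving on
    # PRIVATE_IP, matching the documented intent of --kv-ip.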
Reference |
CVSS |
Base: | 9.8 |
Impact: | 5.9 |
Exploitability: | 3.9 |
Access
Vector | Complexity | Authentication
NETWORK | LOW | NONE
Impact
Confidentiality | Integrity | Availability
HIGH | HIGH | HIGH
CVSS vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H |
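For reference, the base score of 9.8 follows from this vector via the standard CVSS v3.1 equations (Scope Unchanged), using the standard metric weights AV:N = 0.85, AC:L = 0.77, PR:N = UI:N = 0.85, and C/I/A:H = 0.56:

\[
\begin{aligned}
\mathrm{ISS} &= 1-(1-0.56)^3 = 0.9148\\
\mathrm{Impact} &= 6.42 \times \mathrm{ISS} \approx 5.87 \;(\text{reported as } 5.9)\\
\mathrm{Exploitability} &= 8.22 \times 0.85 \times 0.77 \times 0.85 \times 0.85 \approx 3.89 \;(\text{reported as } 3.9)\\
\mathrm{Base} &= \operatorname{Roundup}\bigl(\min(5.87 + 3.89,\ 10)\bigr) = \operatorname{Roundup}(9.76) = 9.8
\end{aligned}
\]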
Last major update
21-05-2025 - 20:24 |
Published
20-05-2025 - 18:15 |