ID | CVE-2025-52566
Summary | llama.cpp is an inference engine for several LLM models, written in C/C++. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) causes the size comparison guarding the token copy to behave incorrectly, allowing carefully crafted text input to heap-overflow the llama.cpp inference engine during tokenization. This issue has been patched in version b5721.
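The mechanism can be illustrated with a minimal, self-contained sketch of this bug class. This is not the llama.cpp source; the function and variable names (fits_vulnerable, fits_fixed, n_tokens_max) are illustrative only. When a signed capacity is compared against an unsigned size, the signed value is converted to unsigned, so a negative or miscomputed capacity passes the check and the subsequent token copy can overrun the heap buffer.

    // Illustrative sketch of the signed vs. unsigned size-check bug class.
    // NOT the llama.cpp code; names and structure are hypothetical.
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Vulnerable pattern: the guard is meant to reject a tokenization result
    // that does not fit into the caller-provided buffer of n_tokens_max entries.
    static bool fits_vulnerable(std::size_t n_result, std::int32_t n_tokens_max) {
        // BUG: the signed n_tokens_max is converted to an unsigned type for the
        // comparison, so a negative capacity becomes huge and the check "passes".
        return n_result <= static_cast<std::size_t>(n_tokens_max);
    }

    // Patched pattern: reject invalid capacities before any unsigned comparison.
    static bool fits_fixed(std::size_t n_result, std::int32_t n_tokens_max) {
        return n_tokens_max >= 0 && n_result <= static_cast<std::size_t>(n_tokens_max);
    }

    int main() {
        const std::size_t n_result = 8;        // tokens produced for a crafted input
        const std::int32_t n_tokens_max = -1;  // undersized / miscomputed capacity

        // The vulnerable check reports "fits"; a subsequent memcpy of n_result
        // tokens into the undersized output buffer would then overflow the heap.
        std::printf("vulnerable check says it fits: %s\n",
                    fits_vulnerable(n_result, n_tokens_max) ? "yes" : "no");
        std::printf("fixed check says it fits:      %s\n",
                    fits_fixed(n_result, n_tokens_max) ? "yes" : "no");
        return 0;
    }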
Reference |
CVSS
Base | 8.6
Impact | 6.0
Exploitability | 1.8
Access
Vector | Complexity | Authentication
LOCAL | LOW | NONE
Impact
Confidentiality | Integrity | Availability
HIGH | HIGH | HIGH
CVSS vector | CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H
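The Base, Impact, and Exploitability values above follow from this vector via the CVSS v3.1 base score equations. The sketch below recomputes them using the metric weights from the CVSS v3.1 specification; the code itself and its names are illustrative, not a library API.

    // Recompute the CVSS v3.1 sub-scores for
    // CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // CVSS v3.1 "Roundup": smallest value with one decimal place >= x
    // (integer-based variant given in Appendix A of the specification).
    static double roundup(double x) {
        const long long n = std::llround(x * 100000.0);
        if (n % 10000 == 0) {
            return static_cast<double>(n) / 100000.0;
        }
        return (std::floor(static_cast<double>(n) / 10000.0) + 1.0) / 10.0;
    }

    int main() {
        const double av = 0.55, ac = 0.77, pr = 0.85, ui = 0.62;  // AV:L AC:L PR:N UI:R
        const double c = 0.56, i = 0.56, a = 0.56;                // C:H I:H A:H
        const bool scope_changed = true;                          // S:C

        const double iss = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a);
        const double impact = scope_changed
            ? 7.52 * (iss - 0.029) - 3.25 * std::pow(iss - 0.02, 15.0)
            : 6.42 * iss;
        const double exploitability = 8.22 * av * ac * pr * ui;

        double base = 0.0;
        if (impact > 0.0) {
            base = scope_changed
                ? roundup(std::min(1.08 * (impact + exploitability), 10.0))
                : roundup(std::min(impact + exploitability, 10.0));
        }

        std::printf("Impact:         %.1f\n", impact);          // 6.0
        std::printf("Exploitability: %.1f\n", exploitability);  // 1.8
        std::printf("Base:           %.1f\n", base);            // 8.6
        return 0;
    }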
Last major update | 26-06-2025 - 18:58
Published | 24-06-2025 - 04:15