ID | CVE-2025-53630
Summary | llama.cpp is an inference engine for several LLM models in C/C++. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579. An illustrative sketch of this bug class follows the record below.
Reference |
CVSS
Base: | 0.0
Impact: | None
Exploitability: | None
Access
Vector | Complexity | Authentication
None | None | None
Impact
Confidentiality | Integrity | Availability
None | None | None
CVSS vector | None
Last major update | 10-07-2025 - 20:15
Published | 10-07-2025 - 20:15
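
The summary above describes an integer overflow in gguf_init_from_file_impl that can lead to a heap out-of-bounds read/write while parsing a GGUF file. The sketch below is a minimal, hypothetical illustration of that bug class when a length-prefixed field is read from an untrusted buffer; the function name parse_kv_array, the element size, and every other detail are assumptions made for illustration only and are not the actual llama.cpp code or the actual fix in commit 26a48ad699d50b6268900062661bd22f3e792579.

```cpp
// Hypothetical sketch of an integer-overflow-to-heap-OOB pattern.
// Names and layout are illustrative; this is NOT the llama.cpp source.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Reads a length-prefixed array from an untrusted buffer.
static bool parse_kv_array(const uint8_t * buf, size_t buf_size, std::vector<uint8_t> & out) {
    if (buf_size < sizeof(uint64_t)) {
        return false;
    }
    uint64_t n_elems;
    std::memcpy(&n_elems, buf, sizeof(n_elems)); // attacker-controlled count

    const uint64_t elem_size = 8;

    // Vulnerable pattern: n_elems * elem_size can wrap around, so a naive
    // bounds check passes while the later copy reads past the end of buf,
    // i.e. a heap out-of-bounds read:
    //   size_t total = (size_t) (n_elems * elem_size);           // unsafe
    //   if (sizeof(uint64_t) + total > buf_size) return false;   // defeated by wraparound

    // Hardened pattern: reject counts whose product would overflow before
    // using the result to size any copy or allocation.
    if (n_elems > (SIZE_MAX - sizeof(uint64_t)) / elem_size) {
        return false; // multiplication would overflow
    }
    const size_t total = (size_t) n_elems * (size_t) elem_size;
    if (sizeof(uint64_t) + total > buf_size) {
        return false; // not enough data in the file
    }

    out.assign(buf + sizeof(uint64_t), buf + sizeof(uint64_t) + total);
    return true;
}

int main() {
    // A tiny well-formed input: n_elems = 1 followed by 8 bytes of payload.
    uint8_t data[16] = {1, 0, 0, 0, 0, 0, 0, 0};
    std::vector<uint8_t> out;
    std::printf("parse ok: %d, bytes read: %zu\n", parse_kv_array(data, sizeof(data), out), out.size());
    return 0;
}
```

The hardened branch shows the general mitigation for this class of bug: verify that the size multiplication cannot wrap before its result is used to index, copy, or allocate heap memory.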