Bug#1108113: llama.cpp: CVE-2025-49847
Source: llama.cpp
Version: 5318+dfsg-1
Severity: grave
Tags: security upstream
X-Debbugs-Cc: carnil@debian.org, Debian Security Team <team@security.debian.org>
Hi,

The following vulnerability was published for llama.cpp.

CVE-2025-49847[0]:
| llama.cpp is an inference engine for several LLM models, written in
| C/C++. Prior to version b5662, an attacker-supplied GGUF model
| vocabulary can trigger a buffer overflow in llama.cpp's
| vocabulary-loading code. Specifically, the helper _try_copy in
| llama.cpp/src/vocab.cpp: llama_vocab::impl::token_to_piece() casts a
| very large size_t token length to int32_t, causing the length check
| (if (length < (int32_t)size)) to be bypassed. As a result, memcpy is
| still called with the oversized size, letting a malicious model
| overwrite memory beyond the intended buffer. This can lead to
| arbitrary memory corruption and potential code execution. This issue
| has been patched in version b5662.
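
The core issue is an integer truncation. The following minimal,
self-contained sketch illustrates the vulnerable pattern; the function
name, signature and parameter names are illustrative approximations,
not the verbatim upstream helper:

  #include <cstdint>
  #include <cstring>

  // Sketch of the vulnerable pattern: `length` is the capacity of
  // the caller's output buffer, `size` is the token length taken
  // from the (attacker-controlled) GGUF vocabulary.
  static int32_t try_copy(const char *token, size_t size,
                          char *buf, int32_t length) {
      // BUG: for size > INT32_MAX, (int32_t)size wraps to a small or
      // negative value, so this "buffer too small" check passes when
      // it should fail ...
      if (length < (int32_t) size) {
          return -(int32_t) size;
      }
      // ... while memcpy still receives the full, untruncated size_t
      // and writes far beyond the end of buf.
      memcpy(buf, token, size);
      return (int32_t) size;
  }

As I understand it, the fix referenced in [2] hardens this helper so
that oversized token lengths are rejected before the truncating cast
can defeat the bounds check.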

If you fix the vulnerability, please also make sure to include the
CVE (Common Vulnerabilities & Exposures) ID in your changelog entry.

For further information see:

[0] https://security-tracker.debian.org/tracker/CVE-2025-49847
https://www.cve.org/CVERecord?id=CVE-2025-49847
[1] https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-8wwf-w4qm-gpqr
[2] https://github.com/ggml-org/llama.cpp/commit/3cfbbdb44e08fd19429fed6cc85b982a91f0efd5

Regards,
Salvatore