Bug#1125060: llama.cpp: CVE-2026-21869
Source: llama.cpp
X-Debbugs-CC: team@security.debian.org
Severity: important
Tags: security

Hi,

The following vulnerability was published for llama.cpp.

CVE-2026-21869[0]:
| llama.cpp provides inference for several LLM models in C/C++. In
| commit 55d4206c8 and prior, the n_discard parameter is parsed
| directly from JSON input in the llama.cpp server's completion
| endpoints without validation that it is non-negative. When a
| negative value is supplied and the context fills up,
| llama_memory_seq_rm/add receives a reversed range and a negative
| offset, causing out-of-bounds memory writes in the token
| evaluation loop. This deterministic memory corruption can crash
| the process or enable remote code execution (RCE). No fix was
| available at the time of publication.

https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-8947-pfff-2f3c
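
For triage, the root cause is a missing bounds check on an
attacker-controlled integer at the HTTP boundary. The sketch below is
illustrative only, not the upstream patch: the helper name
parse_n_discard is hypothetical, and the use of nlohmann::json (the
JSON library bundled with the llama.cpp server) is an assumption
about the surrounding code. It shows the kind of non-negativity check
that would keep llama_memory_seq_rm/add from ever receiving a
reversed range:

    // Illustrative sketch only: parse_n_discard is a hypothetical
    // helper, not code taken from llama.cpp.
    #include <nlohmann/json.hpp>
    #include <stdexcept>

    static int parse_n_discard(const nlohmann::json & body, int fallback) {
        // json::value() returns the fallback when the key is absent.
        const int n_discard = body.value("n_discard", fallback);
        if (n_discard < 0) {
            // A negative n_discard later feeds the context-shift
            // arithmetic, handing llama_memory_seq_rm/add a range
            // with p0 > p1 and a negative shift offset.
            throw std::invalid_argument("n_discard must be >= 0");
        }
        return n_discard;
    }

Rejecting the value where the JSON is parsed (in a real server, with
an HTTP 400 response rather than an uncaught exception) keeps the
invariant visible at the trust boundary instead of clamping it deep
inside the evaluation loop.
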
If you fix the vulnerability please also make sure to include the
CVE (Common Vulnerabilities & Exposures) id in your changelog entry.
For further information see:

[0] https://security-tracker.debian.org/tracker/CVE-2026-21869
    https://www.cve.org/CVERecord?id=CVE-2026-21869

Please adjust the affected versions in the BTS as needed.