
Bug#1109124: llama.cpp: CVE-2025-53630



On 2025-07-11 21:19, Salvatore Bonaccorso wrote:
> The following vulnerability was published for llama.cpp.
> 
> CVE-2025-53630[0]:
> | llama.cpp is an inference of several LLM models in C/C++. Integer
> | Overflow in the gguf_init_from_file_impl function in
> | ggml/src/gguf.cpp can lead to Heap Out-of-Bounds Read/Write.
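
For reference, the bug class here is unchecked size arithmetic while
parsing an untrusted .gguf file: a 64-bit element count multiplied by a
per-element type size can wrap around, so a far-too-small buffer gets
allocated and the parser then reads and writes past its end. Below is a
minimal sketch of the kind of overflow check involved (not the actual
gguf.cpp code; the helper name checked_tensor_nbytes and the values are
made up for illustration):

    // Illustrative sketch only, not the actual gguf.cpp code. The helper
    // name and the values below are invented for this example.
    #include <cstdint>
    #include <cstdio>
    #include <limits>

    // Compute n_elements * type_size, refusing to proceed if the 64-bit
    // multiplication would wrap around.
    static bool checked_tensor_nbytes(uint64_t n_elements, uint64_t type_size,
                                      uint64_t * out) {
        if (type_size != 0 &&
            n_elements > std::numeric_limits<uint64_t>::max() / type_size) {
            return false; // overflow: the file header is lying about sizes
        }
        *out = n_elements * type_size;
        return true;
    }

    int main() {
        // Pretend these came from an attacker-controlled GGUF header.
        const uint64_t n_elements = UINT64_C(1) << 62;
        const uint64_t type_size  = 8;

        uint64_t nbytes = 0;
        if (!checked_tensor_nbytes(n_elements, type_size, &nbytes)) {
            std::fprintf(stderr, "invalid tensor size, refusing to load\n");
            return 1;
        }
        std::printf("tensor needs %llu bytes\n", (unsigned long long) nbytes);
        return 0;
    }

Without such a check, the multiplication above wraps to 0, a zero-byte
allocation succeeds, and the per-element accesses that follow land out
of bounds, matching the heap out-of-bounds read/write in the advisory.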

This is a bit of an interesting situation: the fix went into the ggml
copy embedded in llama.cpp, but it hasn't been synced back to the main
ggml repository yet. And because there is also an ABI break, the newest
llama.cpp doesn't build against the old ggml.

I'll ask upstream for a sync, and to do so automatically in the future
whenever a CVE gets reported.

Nevertheless, I really need to figure out a better way to deal with the
llama.cpp, whisper.cpp, and ggml triad. Re-embedding isn't an option:
the ggml build is already pretty complicated by itself, and adding
another layer would be a pain.

Best,
Christian

