
Bug#1109124: llama.cpp: CVE-2025-53630

Hi Christian,

On Sun, Jul 13, 2025 at 08:45:02AM +0200, Christian Kastner wrote:
> Hi Salvatore,
> 
> On 2025-07-13 07:49, Salvatore Bonaccorso wrote:
> > On Sat, Jul 12, 2025 at 12:04:34AM +0200, Christian Kastner wrote:
> >> Nevertheless, I really need to figure out a better way to deal with
> >> llama.cpp, whisper.cpp, and ggml triad. Re-embedding isn't an option as
> >> the ggml build is already pretty complicated by itself, adding another
> >> layer would be a pain.
> > 
> > Thanks. The gguf.cpp as embedded in llama.cpp is compiled and used,
> > is that correct? Or do we use the external ggml from the system?
> 
> That's how llama.cpp is primarily developed and distributed, but for
> Debian we ignore the embedded version and build a standalone src:ggml.

Ack.
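
For my own reference, a minimal sketch of what pointing the build at
the system ggml could look like; the LLAMA_USE_SYSTEM_GGML CMake
option is an assumption on my part, not something I checked in the
actual debian/rules:

  # Hedged sketch: configure llama.cpp against the system ggml rather
  # than the embedded copy. LLAMA_USE_SYSTEM_GGML is assumed here; the
  # real Debian packaging may wire this up differently.
  cmake -B build \
      -DLLAMA_USE_SYSTEM_GGML=ON \
      -DCMAKE_BUILD_TYPE=Release
  cmake --build build
  # Quick check that the result links the shared system library:
  ldd build/bin/llama-cli | grep -i ggml

Building against the system copy like this is also what lets a single
security upload of src:ggml cover both reverse dependencies.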

> This is mainly because the ggml build is already quite complex (multiple
> CPU and GPU backends), and redundant between llama.cpp and whisper.cpp
> (still in NEW).
> 
> llama.cpp and whisper.cpp should eventually get +ds repacks that
> strip their embedded copies; I just want to wait a bit more to see
> whether a standalone src:ggml remains viable.

Ack, I think that would clarify the situation a bit more, though it
needs to be balanced against feasibility.
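
For the record, the usual mechanism for such a repack would be a
Files-Excluded stanza in debian/copyright plus a repack suffix in
debian/watch; a minimal sketch, with the excluded path being my guess
rather than the packages' actual layout:

  # debian/copyright header stanza -- uscan drops the listed paths
  # when repacking the upstream tarball, which is what the +ds suffix
  # signals. The path is illustrative; the embedded copy may live
  # under a different directory in either package.
  Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
  Upstream-Name: llama.cpp
  Files-Excluded: ggml/*

  # debian/watch then needs a matching repack suffix, e.g.:
  #   opts="repacksuffix=+ds,dversionmangle=s/\+ds$//"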

Regards,
Salvatore

