Re: llama.cpp, whisper.cpp, ggml: Next steps
I managed to get the whisper.cpp git repo on salsa to build with the
non-embedded ggml package. Please have a look at
<URL: https://salsa.debian.org/deeplearning-team/whisper.cpp >.
Note, I am convinced 2010-ext-ggml.patch needs a rewrite by someone who
understands CMake better than I do, as its way of linking to the system
libraries is a bit nasty, and I would rather use -L/usr/lib... instead
of listing the library path directly. Also, I suspect
2020-llama-newer-ggml.patch will become obsolete once upstream updates
the talk-llama copy of llama, as it is a hack to build with a newer
version of ggml.
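
To illustrate what I mean (a rough sketch, not the actual patch
content; the target name "whisper" and the exact path are just
examples), the difference is between hard-coding the full library
path and letting the linker search for it:

  # nasty: hard-coded path to the system library
  target_link_libraries(whisper PRIVATE /usr/lib/libggml.so)

  # nicer: let the linker search its default paths (-lggml),
  # or use find_library() to locate it portably
  target_link_libraries(whisper PRIVATE ggml)

The latter also copes with multiarch paths without any extra work.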
--
Happy hacking
Petter Reinholdtsen