
Bug#1063673: ITP: llama.cpp -- Inference of Meta's LLaMA model (and others) in pure C/C++



[Christian Kastner]
> I'm open for better ideas, though.

I find in general that programs written with run-time selection of
optimizations are far superior to per-host compilations, at least from
a system administration viewpoint.  I guess such an approach would
require rewriting llama.cpp, and I have no idea how much work that
would be.
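
For illustration, here is a minimal sketch of the kind of run-time
dispatch I have in mind, assuming GCC/Clang builtins on x86; the
function names are only placeholders, not anything from llama.cpp:

    /* Sketch: pick an implementation once at startup, based on the
       CPU the binary actually runs on, so one build serves all hosts. */
    #include <stdio.h>

    static void matmul_scalar(void) { puts("scalar path"); }
    static void matmul_avx2(void)   { puts("AVX2 path"); }

    /* Returns a pointer to the best available implementation. */
    static void (*select_matmul(void))(void)
    {
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx2"))
            return matmul_avx2;
        return matmul_scalar;
    }

    int main(void)
    {
        void (*matmul)(void) = select_matmul();
        matmul();   /* same binary works on old and new CPUs */
        return 0;
    }

The optimized variants still need to be compiled with the right
target flags, but the point is that the choice happens at run time
rather than at package build time.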

I look forward to having a look at your git repo to see if there is
something there I can learn from for the whisper.cpp packaging.

-- 
Happy hacking
Petter Reinholdtsen

