Bug#1063673: ITP: llama.cpp -- Inference of Meta's LLaMA model (and others) in pure C/C++
[M. Zhou]
> I second this. llama-server is also the service endpoint for DebGPT.
So, what exactly needs to happen for llama-server to be included in the
package?
I found this in d/copyright:
DFSG compliance
---------------
The server example contains a number of minified and generated files in the
frontend. These seem to be essential to the example, so the server example
has been removed entirely, for now.
I guess some build mechanics need to be added to rebuild the minified
and generated files from source, but I do not know which ones are the
problem.
According to examples/server/README.md the "Web UI" is built using 'npm
run build', so I guess some Node.js dependencies are needed. Sadly, I do
not know how to convince npm not to download random stuff from the
Internet.
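For what it is worth, one possible shape for this in debian/rules might
be an override that runs the documented 'npm run build' with npm forced
offline, against vendored dependencies. This is only a sketch under
assumptions: the webui path, the vendored-cache location, and whether
'npm ci --offline' suffices are all unverified guesses, not tested
packaging.

```make
# Hypothetical debian/rules fragment (paths and cache location assumed).
override_dh_auto_build:
	dh_auto_build
	# --offline makes npm fail instead of fetching from the network;
	# this presupposes the dependencies are vendored in the source package.
	cd examples/server/webui && \
		npm ci --offline --cache $(CURDIR)/debian/npm-cache && \
		npm run build
```

Whether Debian's node tooling (e.g. pkg-js-tools) already handles this
case better, I do not know.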
--
Happy hacking
Petter Reinholdtsen