
Re: Bug#1094806: ITP: ollama -- large language model tools



Hi Simon,

On 2025-01-31 23:55, Simon Josefsson wrote:
> Do you have any general thoughts around packaging ollama?

Not yet; I haven't had much contact with it so far.

However, there was a recent HN thread [1] on ollama and I was quite
surprised to read some pretty negative comments.

> I started looking at the llama.cpp code that is inside ollama and I have
> many concerns.  Just writing the debian/copyright file didn't feel fun.

Yeah, I can imagine.

> Ollama intentionally patch upstream llama before use, see:
> 
> https://salsa.debian.org/jas/ollama/-/blob/debian/sid/llama/README.md?ref_type=heads#vendoring
> https://salsa.debian.org/jas/ollama/-/tree/debian/sid/llama/patches?ref_type=heads
> 
> I didn't review any of them yet.  Any thoughts?  Could some be
> upstreamed, or made tunable somehow so that ollama could get what it
> wants from llama.cpp but not disturb other users?

*If* that's somehow possible, then I would really recommend it, if
only because there are so many llama.cpp backends (to support various
CPUs, GPUs, and compute stacks).

> Do you think it is possible for ollama to use a llama.cpp packaged
> outside of ollama?  The entire package is complex and I haven't
> familiarized myself with it.  My perception is that it builds llama.cpp
> in a way that fits ollama and integrate it into the ollama Go build.
> Finding some way to revert or parametrize that integration would be
> nice, so that ollama could use the Debian-packaged llama.cpp code.  But
> I'm not sure how to do that, given the patches above and generally how
> this is all integrated together.  And different release schedules.

Honestly, I don't know. I only looked at some of the patches; some of
them seem trivial, e.g. [2].
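As a rough illustration of the kind of parametrization Simon mentions:
Go's cgo honours the CGO_CFLAGS/CGO_LDFLAGS environment variables, so in
principle a build could be steered toward a system-wide llama.cpp rather
than the vendored copy. The paths and library name below are assumptions
for the sake of the sketch, not what any Debian package actually ships:

```shell
# Hypothetical sketch only: point a Go/cgo build at a system llama.cpp
# instead of a vendored tree. The include path, library path, and the
# -lllama name are assumptions, not the actual Debian packaging layout.
export CGO_CFLAGS="-I/usr/include"
export CGO_LDFLAGS="-L/usr/lib/x86_64-linux-gnu -lllama"
go build ./...
```

Whether something like this could work in practice depends on the
vendored patches staying API-compatible with the packaged library, which
is exactly the open question.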

> I am very happy if you or others from the debian-ai team can help on
> ollama, or join the debian-ai to maintain it there.

Join us! We team-maintain quite a few AI/ML related packages. The Debian
ROCm Team is also part of debian-ai.

> I'm certainly no llama expert and the more I look into this package,
> the more I think such knowledge will be useful.  I put it into the
> Go team because that's what I'm familiar with, and because ollama is
> written in Go.  But the llama relationship has to be resolved
> somehow.

Hm, instinctively I would have placed it in the Go team as well, as that
is where all the work would be done. But then again, the product is
definitely an "AI" tool.

I guess it's your pick. Either way, I'd be happy to see whether I can
contribute, as I've been wanting to look into Go packaging at some
point anyway.

Best,
Christian

[1]: https://news.ycombinator.com/item?id=42886680

[2]: https://salsa.debian.org/jas/ollama/-/blob/debian/sid/llama/patches/0007-blas.patch?ref_type=heads

