
Bug#1038326: ITP: transformers -- State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow (it ships LLMs)



Package: wnpp
Severity: wishlist
Owner: Mo Zhou <lumin@debian.org>
X-Debbugs-Cc: debian-devel@lists.debian.org, debian-ai@lists.debian.org

* Package name    : transformers
  Upstream Contact: HuggingFace
* URL             : https://github.com/huggingface/transformers
* License         : Apache-2.0
  Description     : State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

I've been using this library for a while.

This package provides a convenient way for people to download and run an LLM locally.
Roughly speaking, running an instruct fine-tuned large language model with 7B parameters
requires at least 16GB of CUDA memory for inference in half/bfloat16 precision
(7e9 parameters * 2 bytes is already ~14GB for the weights alone, before activations
and the KV cache). I have not tried to run any LLM with > 3B parameters on CPU ...
that can be slow. llama.cpp is a good choice for running LLMs on CPU, but it supports
fewer models than this library, and it only supports inference.
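
For illustration, here is a minimal sketch of what local inference with this package
looks like; the model id below is a hypothetical placeholder, to be replaced with
whatever instruct-tuned checkpoint you actually have:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical placeholder id; substitute a real instruct-tuned checkpoint.
    model_id = "some-org/some-7b-instruct"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # bfloat16 halves the weight memory compared to float32.
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16).to("cuda")

    inputs = tokenizer("Hello, Debian!", return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))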

I don't know how many dependencies are still missing, but there should not be too many.
JAX and TensorFlow are optional dependencies, so the package still works while they are
absent from our archive. Anyway, I think running a large language model locally with
Debian packages will be interesting. The CUDA variant of PyTorch is already in the NEW
queue.

That said, this is actually a very comprehensive library, which provides far more
functionality than just running LLMs.
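
As a small illustration of that breadth, the high-level pipeline API covers many tasks
besides text generation. A sketch (the task's default checkpoint is fetched from the
HuggingFace Hub on first use):

    from transformers import pipeline

    # Sentiment analysis, one of many non-LLM tasks the library supports.
    classifier = pipeline("sentiment-analysis")
    print(classifier("Debian packaging is fun."))
    # prints a list like [{'label': 'POSITIVE', 'score': ...}]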


