Re: GPT4All
Hi Sharon,
Sharon Kimble <boudiccas@skimble09.plus.com> writes:
> I've now done a bare-back installation of debian 13, and I can now run
> chat okay, but when I run it from the commandline it shows these
> errors:-
>
> ````
> Failed to load libllamamodel-mainline-cuda.so: dlopen: libcuda.so.1: cannot open shared object file: No such file or directory
> Failed to load libllamamodel-mainline-cuda-avxonly.so: dlopen: libcuda.so.1: cannot open shared object file: No such file or directory
> constructGlobalLlama: could not find Llama implementation for backend: cuda
> constructGlobalLlama: could not find Llama implementation for backend: cuda
> [Warning] (Fri Aug 22 12:31:56 2025):
> qrc:/gpt4all/qml/AddModelView.qml:139:13: QML AddHFModelView: Detected
> anchors on an item that is managed by a layout. This is undefined
> behavior; use Layout.alignment instead.
> ````
>
> So it seems that I'm missing some libraries?
>
> Can anyone tell me what I need to install to get rid of these errors please?
Do you have an Nvidia graphics card? If not, you can't in general run
anything that requires CUDA, Nvidia's proprietary programming model
and API for their GPUs, although someone does seem to have written a
compatibility layer for AMD cards:
https://github.com/BillowStudios/ZLUDA
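If you're not sure, two quick checks from a terminal will tell you
whether the machine has an Nvidia GPU at all and whether libcuda.so.1
(the driver library those error messages mention) is installed. This
is just a generic sketch, nothing GPT4All-specific:

````
# Is there an Nvidia GPU in this machine at all?
lspci | grep -i nvidia

# Does the dynamic linker know about libcuda.so.1, the library that
# libllamamodel-mainline-cuda.so is failing to dlopen?
ldconfig -p | grep libcuda.so.1
````

If the first command prints nothing, there is no Nvidia card and
installing CUDA libraries won't help.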
CUDA installation guides for Linux (including Debian) are available
here:
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
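On Debian the driver library that provides libcuda.so.1 lives in the
non-free section, so as an alternative to Nvidia's own installer,
something along these lines should work, though I haven't tried it on
Debian 13, so treat the package names as a starting point:

````
# Assumes the non-free component is enabled in /etc/apt/sources.list.
# Package names may differ slightly on Debian 13; check with
# 'apt search nvidia-driver' before installing.
sudo apt update
sudo apt install nvidia-driver nvidia-cuda-toolkit
````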
Cheers,
Loris
--
This signature is currently under construction.
- References:
  - GPT4All
    - From: Sharon Kimble <boudiccas@skimble09.plus.com>
  - Re: GPT4All
    - From: Nicolas George <george@nsup.org>
  - Re: GPT4All
    - From: Sharon Kimble <boudiccas@skimble09.plus.com>
  - Re: GPT4All
    - From: Sharon Kimble <boudiccas@skimble09.plus.com>