
Re: KVM: GPU passthrough



OK, it works now: I reduced the RAM size I gave to the guest along with the
GPU. But I saw errors like the following.


---% kernel_err:

[    9.487622] r8169 0000:02:00.0: firmware: failed to load
rtl_nic/rtl8168d-2.fw (-2)
[    9.487697] firmware_class: See https://wiki.debian.org/Firmware
for information about missing firmware
[ 1159.047398] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1159.047534] handlers:
[ 1159.047572] [<000000007029899b>] usb_hcd_irq [usbcore]
[ 1164.024714] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1164.024846] handlers:
[ 1164.024883] [<000000007029899b>] usb_hcd_irq [usbcore]
[ 1268.843310] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1268.843448] handlers:
[ 1268.843487] [<000000007029899b>] usb_hcd_irq [usbcore]
[ 1323.645066] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 1323.645198] handlers:
[ 1323.645236] [<000000007029899b>] usb_hcd_irq [usbcore]
root@homeKvm:~#
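
For reference, those look like two separate issues: the r8169 message means the
non-free Realtek firmware package is not installed, and the repeating "irq 16:
nobody cared" is the kernel disabling a misbehaving interrupt line. A sketch of
the usual Debian fix (assuming the non-free section is enabled in sources.list;
adjust to your setup):

```shell
# Install the missing Realtek NIC firmware (needs the non-free
# section enabled in /etc/apt/sources.list)
apt install firmware-realtek

# For the repeating "irq 16: nobody cared", try the workaround the
# kernel itself suggests: add irqpoll to the kernel command line in
# /etc/default/grub, e.g.
#
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet irqpoll"
#
# then regenerate the grub config and reboot:
update-grub
```

Note that irqpoll is a workaround, not a fix; if it helps, the underlying
cause is usually an interrupt-routing problem involving the passed-through
device.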

On Fri, Apr 30, 2021 at 7:36 PM Gokan Atmaca <linux.gokan@gmail.com> wrote:
>
> The system boots up but then freezes. It just stays like that. I guess the
> problem is with the hardware.
>
>
>
> On Tue, Apr 27, 2021 at 6:14 PM Christian Seiler <christian@iwakd.de> wrote:
> >
> > Hi there,
> >
> > On 2021-04-09 00:37, Gokan Atmaca wrote:
> > > error:
> > > pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio
> > > 0000:01:00.0: group 1 is not viable
> > > Please ensure all devices within the iommu_group are bound to their
> > > vfio bus driver.
> >
> > This is a known issue with PCIe passthrough: depending on your
> > mainboard and CPU, some PCIe devices will be grouped together,
> > and you will either be able to forward _all_ devices in the
> > group to the VM or none at all.
> >
> > (If you have a "server" GPU that supports SR-IOV you'd have
> > additional options, but that doesn't appear to be the case.)
> >
> > This will highly depend on the PCIe slot the card is in, as well
> > as potentially some BIOS/UEFI settings on PCIe lane distribution.
> >
> > First let's find out what devices are in the same IOMMU group.
> > From your kernel log:
> >
> > [    0.592011] pci 0000:00:01.0: Adding to iommu group 1
> > [    0.594091] pci 0000:01:00.0: Adding to iommu group 1
> > [    0.594096] pci 0000:01:00.1: Adding to iommu group 1
> >
> > Could you check with "lspci" what these devices are in your case?
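
A quick way to answer that from the host is to walk sysfs; this loop (assuming
the standard /sys/kernel/iommu_groups layout) prints every group together with
the lspci description of each device in it:

```shell
# Print every IOMMU group and the lspci description of each device in it.
# Assumes the standard Linux sysfs layout; run this on the host.
for g in /sys/kernel/iommu_groups/*; do
    [ -d "$g" ] || continue          # glob may not match if the IOMMU is off
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        [ -e "$d" ] || continue
        echo "    $(lspci -nns "${d##*/}")"
    done
done
```

If the directory is empty or missing, the IOMMU is not enabled at all (check
that intel_iommu=on or amd_iommu=on is on the kernel command line and that
VT-d/AMD-Vi is enabled in the BIOS/UEFI).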
> >
> > If you are comfortable forwarding the other two devices into the
> > VM as well, just add that to the list of passthrough devices,
> > then this should work.
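
As a sketch, with the addresses taken from the kernel log above: 0000:00:01.0
is the PCIe root port, which you do not pass through; only the two endpoint
functions (the GPU and, typically, its HDMI audio function) go to the guest.
In raw qemu terms that would look something like:

```shell
# Hypothetical qemu invocation (the "..." stands for the rest of your
# existing command line): pass both functions of the GPU so the whole
# endpoint side of IOMMU group 1 moves to the guest together.
qemu-system-x86_64 \
    ... \
    -device vfio-pci,host=0000:01:00.0 \
    -device vfio-pci,host=0000:01:00.1
```

With libvirt, the equivalent is adding a second <hostdev> entry for
0000:01:00.1 alongside the existing one.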
> >
> > If you need the other two devices on the host, then you need to
> > either put the GPU into a different PCIe slot, put the other
> > devices into a different PCIe slot, or find some BIOS/UEFI setting
> > for PCIe lane management that separates the devices in question
> > into different IOMMU groups implicitly. (BIOS/UEFI settings will
> > typically not mention IOMMU groups at all, so look for "lane
> > management" or "lane distribution" or something along those
> > lines. You might need to drop some PCIe lanes from other devices
> > and give them directly to the GPU you want to pass through in
> > order for this to work, or vice-versa, depending on the specific
> > situation.)
> >
> > Note: the GUI tool "lstopo" from the package "hwloc" is _very_
> > useful to identify how the PCIe devices are organized in your
> > system and may give you a clue as to why your system is grouped
> > together in the way it is.
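
On Debian the tool lives in the hwloc package:

```shell
apt install hwloc   # provides lstopo (and lstopo-no-graphics for consoles)
lstopo              # graphical view of the CPU/PCIe topology
```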
> >
> > Hope that helps.
> >
> > Regards,
> > Christian
> >

