
Bug#1035564: marked as done (unblock: lxd/5.0.2-5)



Your message dated Sat, 06 May 2023 12:43:01 +0000
with message-id <E1pvHFd-008eqW-FA@respighi.debian.org>
and subject line unblock lxd
has caused the Debian Bug report #1035564,
regarding unblock: lxd/5.0.2-5
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
1035564: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1035564
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: release.debian.org
Severity: normal
User: release.debian.org@packages.debian.org
Usertags: unblock
Control: affects -1 + src:lxd

Please unblock package lxd

[ Reason ]
Added a missing Replaces in d/control for bin:lxd-client to address an
upgrade edge case from bullseye: if lxc was already installed and some
other package being upgraded Depends on or Recommends lxd, the upgrade
could fail with a dpkg file-overwrite error (see #1034971).
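
Concretely, the change pairs the existing versioned Breaks with a
matching Replaces in d/control (quoting the relevant stanza from the
debdiff further down), which tells dpkg that lxd-client may take over
files, here the lxc bash completion, from older lxc packages:

```
Package: lxd-client
Breaks: lxc (<< 1:5.0.1)
Replaces: lxc (<< 1:5.0.1)
```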

lxd has autopkgtests, but as we're now within 20 days of the full
freeze, a manual unblock is required.

[ Impact ]
There's a chance that an upgrade from bullseye to bookworm might fail.

Additionally, this upload to unstable reset the migration counter
before lxd 5.0.2-4, which fixes running with the version of qemu in
testing, was able to migrate.

[ Tests ]
I manually tested installing lxd-client on a bullseye system with lxc
installed:

> Preparing to unpack lxd-client_5.0.2-3_amd64.deb ...
> Unpacking lxd-client (5.0.2-3) ...
> dpkg: error processing archive lxd-client_5.0.2-3_amd64.deb (--install):
>  trying to overwrite '/usr/share/bash-completion/completions/lxc', which is also in package lxc 1:4.0.6-2+deb11u2
> dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)

With the fix:

> Preparing to unpack .../lxd-client_5.0.2-5_amd64.deb ...
> Unpacking lxd-client (5.0.2-5) ...
> Replacing files in old package lxc (1:4.0.6-2+deb11u2) ...

[ Risks ]
I don't believe there are any risks with this change.

[ Checklist ]
  [x] all changes are documented in the d/changelog
  [x] I reviewed all changes and I approve them
  [x] attach debdiff against the package in testing

[ Other info ]
As mentioned, lxd 5.0.2-4 had been on track to automatically migrate
around May 14 with a fix for use with the version of qemu in testing
(see #1030365). Since the requested debdiff is against the version of
lxd in testing, it contains changes from both -4 and -5.

unblock lxd/5.0.2-5
diff -Nru lxd-5.0.2/debian/changelog lxd-5.0.2/debian/changelog
--- lxd-5.0.2/debian/changelog	2023-03-08 02:27:33.000000000 +0000
+++ lxd-5.0.2/debian/changelog	2023-05-05 12:42:21.000000000 +0000
@@ -1,3 +1,16 @@
+lxd (5.0.2-5) unstable; urgency=medium
+
+  * Add missing Replaces in d/control for lxd-client to fix potential bullseye
+    upgrade issue (Closes: #1034971)
+
+ -- Mathias Gibbens <gibmat@debian.org>  Fri, 05 May 2023 12:42:21 +0000
+
+lxd (5.0.2-4) unstable; urgency=medium
+
+  * Cherry-pick upstream fix for qemu >= 7.2 (Closes: #1030365)
+
+ -- Mathias Gibbens <gibmat@debian.org>  Sun, 23 Apr 2023 17:50:08 +0000
+
 lxd (5.0.2-3) unstable; urgency=medium
 
   * Cherry-pick upstream commit to fix issue with btrfs-progs >= 6.0
diff -Nru lxd-5.0.2/debian/control lxd-5.0.2/debian/control
--- lxd-5.0.2/debian/control	2023-03-08 02:27:33.000000000 +0000
+++ lxd-5.0.2/debian/control	2023-05-05 12:42:21.000000000 +0000
@@ -99,8 +99,9 @@
 Package: lxd-client
 # The lxc binary doesn't depend on liblxc1, so it can be built for any architecture
 Architecture: any
-# lxc prior to 5.0.0 shipped a file that conflicts with LXD (see #1010843)
-Breaks: lxc (<< 1:5.0.0)
+# lxc prior to 5.0.1 shipped a file that conflicts with LXD (see #1010843, #1034971)
+Breaks: lxc (<< 1:5.0.1)
+Replaces: lxc (<< 1:5.0.1)
 Depends: ${misc:Depends},
          ${shlibs:Depends}
 Built-Using: ${misc:Built-Using}
diff -Nru lxd-5.0.2/debian/patches/006-cherry-pick-btrfs-fix.patch lxd-5.0.2/debian/patches/006-cherry-pick-btrfs-fix.patch
--- lxd-5.0.2/debian/patches/006-cherry-pick-btrfs-fix.patch	2023-03-08 02:27:33.000000000 +0000
+++ lxd-5.0.2/debian/patches/006-cherry-pick-btrfs-fix.patch	2023-05-05 12:42:21.000000000 +0000
@@ -1,3 +1,6 @@
+From: Mathias Gibbens <gibmat@debian.org>
+Description: Cherry-pick upstream commit to fix issue with btrfs-progs >= 6.0
+Origin: https://github.com/lxc/lxd/pull/11333
 From e7c852e43c0479060e630adb50342d2552a6cdad Mon Sep 17 00:00:00 2001
 From: Thomas Parrott <thomas.parrott@canonical.com>
 Date: Tue, 7 Feb 2023 10:04:27 +0000
diff -Nru lxd-5.0.2/debian/patches/007-cherry-pick-qemu-fix.patch lxd-5.0.2/debian/patches/007-cherry-pick-qemu-fix.patch
--- lxd-5.0.2/debian/patches/007-cherry-pick-qemu-fix.patch	1970-01-01 00:00:00.000000000 +0000
+++ lxd-5.0.2/debian/patches/007-cherry-pick-qemu-fix.patch	2023-05-05 12:42:21.000000000 +0000
@@ -0,0 +1,90 @@
+From: Mathias Gibbens <gibmat@debian.org>
+Description: Cherry-pick upstream fix for qemu >= 7.2, rebase, and drop SEV features not in current LTS release
+Origin: https://github.com/lxc/lxd/pull/11594
+diff --git a/lxd/instance/drivers/driver_qemu.go b/lxd/instance/drivers/driver_qemu.go
+index 9dcdd9da7..08211b034 100644
+--- a/lxd/instance/drivers/driver_qemu.go
++++ b/lxd/instance/drivers/driver_qemu.go
+@@ -2878,17 +2878,11 @@ func (d *qemu) generateQemuConfigFile(mountInfo *storagePools.MountInfo, busName
+ // addCPUMemoryConfig adds the qemu config required for setting the number of virtualised CPUs and memory.
+ // If sb is nil then no config is written.
+ func (d *qemu) addCPUMemoryConfig(cfg *[]cfgSection) error {
+-	drivers := DriverStatuses()
+-	info := drivers[instancetype.VM].Info
+-	if info.Name == "" {
+-		return fmt.Errorf("Unable to ascertain QEMU version")
+-	}
+-
+ 	// Figure out what memory object layout we're going to use.
+ 	// Before v6.0 or if version unknown, we use the "repeated" format, otherwise we use "indexed" format.
+ 	qemuMemObjectFormat := "repeated"
+ 	qemuVer6, _ := version.NewDottedVersion("6.0")
+-	qemuVer, _ := version.NewDottedVersion(info.Version)
++	qemuVer, _ := d.version()
+ 	if qemuVer != nil && qemuVer.Compare(qemuVer6) >= 0 {
+ 		qemuMemObjectFormat = "indexed"
+ 	}
+@@ -3154,11 +3148,9 @@ func (d *qemu) addDriveConfig(bootIndexes map[string]int, driveConf deviceConfig
+ 	isRBDImage := strings.HasPrefix(driveConf.DevPath, device.RBDFormatPrefix)
+ 
+ 	// Check supported features.
+-	drivers := DriverStatuses()
+-	info := drivers[d.Type()].Info
+-
+ 	// Use io_uring over native for added performance (if supported by QEMU and kernel is recent enough).
+ 	// We've seen issues starting VMs when running with io_ring AIO mode on kernels before 5.13.
++	info := DriverStatuses()[instancetype.VM].Info
+ 	minVer, _ := version.NewDottedVersion("5.13.0")
+ 	if shared.StringInSlice(device.DiskIOUring, driveConf.Opts) && shared.StringInSlice("io_uring", info.Features) && d.state.OS.KernelVersion.Compare(minVer) >= 0 {
+ 		aioMode = "io_uring"
+@@ -3592,12 +3584,21 @@ func (d *qemu) addNetDevConfig(busName string, qemuDev map[string]string, bootIn
+ 			qemuNetDev := map[string]any{
+ 				"id":         fmt.Sprintf("%s%s", qemuNetDevIDPrefix, escapedDeviceName),
+ 				"type":       "tap",
+-				"vhost":      true,
++				"vhost":      false, // This is selectively enabled based on QEMU version later.
+ 				"script":     "no",
+ 				"downscript": "no",
+ 				"ifname":     nicName,
+ 			}
+ 
++			// vhost-net network accelerator is causing asserts since QEMU 7.2.
++			// Until previous behaviour is restored or we figure out how to pass the veth device using
++			// file descriptors we will just disable the vhost-net accelerator.
++			qemuVer, _ := d.version()
++			vhostMaxVer, _ := version.NewDottedVersion("7.2")
++			if qemuVer != nil && qemuVer.Compare(vhostMaxVer) < 0 {
++				qemuNetDev["vhost"] = true
++			}
++
+ 			queueCount := configureQueues(len(cpus))
+ 			if queueCount > 0 {
+ 				qemuNetDev["queues"] = queueCount
+@@ -6501,6 +6502,17 @@ func (d *qemu) checkFeatures(hostArch int, qemuPath string) ([]string, error) {
+ 	return features, nil
+ }
+ 
++// version returns the QEMU version.
++func (d *qemu) version() (*version.DottedVersion, error) {
++	info := DriverStatuses()[instancetype.VM].Info
++	qemuVer, err := version.NewDottedVersion(info.Version)
++	if err != nil {
++		return nil, fmt.Errorf("Failed parsing QEMU version: %w", err)
++	}
++
++	return qemuVer, nil
++}
++
+ func (d *qemu) Metrics(hostInterfaces []net.Interface) (*metrics.MetricSet, error) {
+ 	if d.agentMetricsEnabled() {
+ 		metrics, err := d.getAgentMetrics()
+@@ -6764,8 +6776,6 @@ func (d *qemu) setCPUs(count int) error {
+ 
+ func (d *qemu) architectureSupportsCPUHotplug() bool {
+ 	// Check supported features.
+-	drivers := DriverStatuses()
+-	info := drivers[d.Type()].Info
+-
++	info := DriverStatuses()[instancetype.VM].Info
+ 	return shared.StringInSlice("cpu_hotplug", info.Features)
+ }
diff -Nru lxd-5.0.2/debian/patches/series lxd-5.0.2/debian/patches/series
--- lxd-5.0.2/debian/patches/series	2023-03-08 02:27:33.000000000 +0000
+++ lxd-5.0.2/debian/patches/series	2023-05-05 12:42:21.000000000 +0000
@@ -3,3 +3,4 @@
 004-revert-use-of-go-criu.patch
 005-add-mips-aliases.patch
 006-cherry-pick-btrfs-fix.patch
+007-cherry-pick-qemu-fix.patch
diff -Nru lxd-5.0.2/debian/README.debian lxd-5.0.2/debian/README.debian
--- lxd-5.0.2/debian/README.debian	2023-03-08 02:27:33.000000000 +0000
+++ lxd-5.0.2/debian/README.debian	2023-05-05 12:42:21.000000000 +0000
@@ -61,10 +61,6 @@
 Known issues
 ============
 
-  * Virtual machine creation is broken with QEMU version 7.2 when utilizing the
-    default network profile. As of March 2023, a fix has not yet been
-    identified (see bug #1030365 for further details).
-
   * Running LXD and Docker on the same host can cause connectivity issues. A
 common reason for these issues is that Docker sets the FORWARD policy to DROP,
 which prevents LXD from forwarding traffic and thus causes the instances to lose



--- End Message ---
--- Begin Message ---
Unblocked.

--- End Message ---
