
Bug#1112283: marked as done (trixie-pu: package nova/31.0.0-6)



Your message dated Sat, 15 Nov 2025 11:21:45 +0000
with message-id <736c7150dc08501cc89945035c406eaf9688e144.camel@adam-barratt.org.uk>
and subject line Closing requests for updates included in 13.2
has caused the Debian Bug report #1112283,
regarding trixie-pu: package nova/31.0.0-6
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
1112283: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1112283
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: release.debian.org
Severity: normal
Tags: trixie
X-Debbugs-Cc: nova@packages.debian.org
Control: affects -1 + src:nova
User: release.debian.org@packages.debian.org
Usertags: pu

Hi,

[ Reason ]
I'd like to update Nova in Trixie to close this bug:
https://bugs.debian.org/1111689

For details, see the upstream release note included in the
attached debdiff, also available here:
https://opendev.org/openstack/nova/src/commit/a7e5377da4c0199443c76802d3dde494d7bea474/releasenotes/notes/bug-2112187-e1c1d40f090e421b.yaml

[ Impact ]
Under certain conditions, a volume may end up attached to
the wrong VM, exposing one customer's data to another.
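
To illustrate, below is a minimal sketch (not part of the upload) of
the kind of direct swap-volume request that the patched package now
rejects. The endpoint and request body come from the Nova API
reference quoted in the debdiff; the URL, token, and UUIDs are made-up
placeholders:

  import requests

  # All values below are placeholders for illustration only.
  NOVA_URL = "http://controller:8774/v2.1"
  TOKEN = "<valid admin token>"
  SERVER = "11111111-1111-1111-1111-111111111111"
  OLD_VOL = "22222222-2222-2222-2222-222222222222"
  NEW_VOL = "33333333-3333-3333-3333-333333333333"

  # A direct "swap volume" call, i.e. one not initiated by Cinder via
  # the os-retype or os-migrate_volume volume actions.
  resp = requests.put(
      f"{NOVA_URL}/servers/{SERVER}/os-volume_attachments/{OLD_VOL}",
      headers={"X-Auth-Token": TOKEN},
      json={"volumeAttachment": {"volumeId": NEW_VOL}},
  )

  # With the fixed package, Nova answers 409 Conflict here unless the
  # source volume has a migration_status set by Cinder.
  print(resp.status_code, resp.reason)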

[ Tests ]
Upstream has extensive unit and functional tests, which I
also run at build time and in autopkgtest (unit tests),
and on my own package-based CI (functional tests).

[ Risks ]
Not much risk, thanks to the testing described above.

[ Checklist ]
  [x] *all* changes are documented in the d/changelog
  [x] I reviewed all changes and I approve them
  [x] attach debdiff against the package in (old)stable
  [x] the issue is verified as fixed in unstable

Please allow me to upload nova/31.0.0-6+deb13u1 as per the
attached debdiff.

Cheers,

Thomas Goirand (zigo)
diff -Nru nova-31.0.0/debian/changelog nova-31.0.0/debian/changelog
--- nova-31.0.0/debian/changelog	2025-07-12 11:35:02.000000000 +0200
+++ nova-31.0.0/debian/changelog	2025-08-21 09:10:49.000000000 +0200
@@ -1,3 +1,17 @@
+nova (2:31.0.0-6+deb13u1) trixie; urgency=high
+
+  * A vulnerability has been identified in OpenStack Nova and OpenStack Watcher
+    in conjunction with volume swap operations performed by the Watcher
+    service. Under specific circumstances, this can lead to a situation where
+    two Nova libvirt instances could reference the same block device, allowing
+    accidental information disclosure to the unauthorized instance. Added
+    upstream patch: OSSN-0094_restrict_swap_volume_to_cinder.patch.
+    (Closes: #1111689).
+  * Blacklist non-deterministic unit test:
+    - ComputeTestCase.test_add_remove_fixed_ip_updates_instance_updated_at
+
+ -- Thomas Goirand <zigo@debian.org>  Thu, 21 Aug 2025 09:10:49 +0200
+
 nova (2:31.0.0-6) unstable; urgency=medium
 
   * Also do it for nova-api-metadata.
diff -Nru nova-31.0.0/debian/patches/OSSN-0094_restrict_swap_volume_to_cinder.patch nova-31.0.0/debian/patches/OSSN-0094_restrict_swap_volume_to_cinder.patch
--- nova-31.0.0/debian/patches/OSSN-0094_restrict_swap_volume_to_cinder.patch	1970-01-01 01:00:00.000000000 +0100
+++ nova-31.0.0/debian/patches/OSSN-0094_restrict_swap_volume_to_cinder.patch	2025-08-21 09:10:49.000000000 +0200
@@ -0,0 +1,637 @@
+Author: Sean Mooney <work@seanmooney.info>
+Date: Fri, 15 Aug 2025 14:33:34 +0100
+Description: restrict swap volume to cinder
+ This change tightens the validation around the attachment
+ update API to ensure that it can only be called if the source
+ volume has a non-empty migration status.
+ .
+ That means it will only accept a request to swap the volume if
+ it is the result of a cinder volume migration.
+ .
+ This change is being made to prevent the instance domain
+ XML from getting out of sync with the nova BDM records
+ and cinder connection info. In the future, support for direct
+ swap volume actions can be re-added if and only if the
+ nova libvirt driver is updated to correctly modify the domain.
+ The libvirt driver is the only driver that supported this API
+ outside of a cinder-orchestrated swap volume.
+ .
+ If the domain XML and BDMs are allowed to get out of sync and an
+ admin later live-migrates the VM, the host path will not be updated
+ for the destination host. Normally this results in a live migration
+ failure, which often prompts the admin to cold migrate instead.
+ However, if the source device path exists on the destination, the
+ migration will proceed. This can lead to two VMs using the same
+ host block device. At best this will cause a crash or data corruption;
+ at worst it will allow one guest to access the data of another.
+ .
+ Prior to this change there was an explicit warning in the nova API
+ reference stating that humans should never call this API because it
+ can lead to this situation. It is now treated as a hard error due
+ to the security implications.
+Bug: https://launchpad.net/bugs/2112187
+Depends-on: https://review.opendev.org/c/openstack/tempest/+/957753
+Change-Id: I439338bd2f27ccd65a436d18c8cbc9c3127ee612
+Signed-off-by: Sean Mooney <work@seanmooney.info>
+Origin: upstream, https://review.opendev.org/c/openstack/nova/+/957759
+
+diff --git a/api-ref/source/os-volume-attachments.inc b/api-ref/source/os-volume-attachments.inc
+index 803d59d..bf5d627 100644
+--- a/api-ref/source/os-volume-attachments.inc
++++ b/api-ref/source/os-volume-attachments.inc
+@@ -185,16 +185,16 @@
+ .. note:: This action only valid when the server is in ACTIVE, PAUSED and RESIZED state,
+           or a conflict(409) error will be returned.
+ 
+-.. warning:: When updating volumeId, this API is typically meant to
+-             only be used as part of a larger orchestrated volume
+-             migration operation initiated in the block storage
+-             service via the ``os-retype`` or ``os-migrate_volume``
+-             volume actions. Direct usage of this API to update
+-             volumeId is not recommended and may result in needing to
+-             hard reboot the server to update details within the guest
+-             such as block storage serial IDs. Furthermore, updating
+-             volumeId via this API is only implemented by `certain
+-             compute drivers`_.
++.. Important::
++
++   When updating volumeId, this API **MUST** only be used
++   as part of a larger orchestrated volume
++   migration operation initiated in the block storage
++   service via the ``os-retype`` or ``os-migrate_volume``
++   volume actions. Direct usage of this API is not supported
++   and will be blocked by nova with a 409 conflict.
++   Furthermore, updating ``volumeId`` via this API is only
++   implemented by `certain compute drivers`_.
+ 
+ .. _certain compute drivers: https://docs.openstack.org/nova/latest/user/support-matrix.html#operation_swap_volume
+ 
+diff --git a/nova/api/openstack/compute/volumes.py b/nova/api/openstack/compute/volumes.py
+index 3f5deaa..f04c3b7 100644
+--- a/nova/api/openstack/compute/volumes.py
++++ b/nova/api/openstack/compute/volumes.py
+@@ -434,6 +434,12 @@
+         except exception.VolumeNotFound as e:
+             raise exc.HTTPNotFound(explanation=e.format_message())
+ 
++        if ('migration_status' not in old_volume or
++            old_volume['migration_status'] in (None, '')):
++            message = (f"volume {old_volume_id} is not migrating this api "
++                       "should only be called by Cinder")
++            raise exc.HTTPConflict(explanation=message)
++
+         new_volume_id = body['volumeAttachment']['volumeId']
+         try:
+             new_volume = self.volume_api.get(context, new_volume_id)
+diff --git a/nova/tests/fixtures/cinder.py b/nova/tests/fixtures/cinder.py
+index 049d22e..732c050 100644
+--- a/nova/tests/fixtures/cinder.py
++++ b/nova/tests/fixtures/cinder.py
+@@ -262,11 +262,15 @@
+                     'attachment_id': attachment['id'],
+                     'mountpoint': '/dev/vdb',
+                 }
+-
++            migration_status = (
++                None if volume_id not in (
++                    self.SWAP_OLD_VOL, self.SWAP_ERR_OLD_VOL)
++                else "migrating")
+             volume.update({
+                 'status': 'in-use',
+                 'attach_status': 'attached',
+                 'attachments': attachments,
++                'migration_status': migration_status
+             })
+         # Otherwise mark the volume as available and detached
+         else:
+diff --git a/nova/tests/functional/notification_sample_tests/test_instance.py b/nova/tests/functional/notification_sample_tests/test_instance.py
+index fcb2812..3e52590 100644
+--- a/nova/tests/functional/notification_sample_tests/test_instance.py
++++ b/nova/tests/functional/notification_sample_tests/test_instance.py
+@@ -1560,8 +1560,14 @@
+     def test_volume_swap_server_with_error(self):
+         server = self._do_setup_server_and_error_flag()
+ 
+-        self._volume_swap_server(server, self.cinder.SWAP_ERR_OLD_VOL,
+-                                 self.cinder.SWAP_ERR_NEW_VOL)
++        # This is calling swap volume but we are emulating cinder
++        # volume migrate in the fixture to allow this.
++        # i.e. this is simulating the workflow where you move a volume
++        # between cinder backends using a temp volume that cinder internally
++        # cleans up at the end of the migration.
++        self._volume_swap_server(
++            server, self.cinder.SWAP_ERR_OLD_VOL,
++            self.cinder.SWAP_ERR_NEW_VOL)
+         self._wait_for_notification('compute.exception')
+ 
+         # Eight versioned notifications are generated.
+@@ -1576,6 +1582,8 @@
+         self.assertLessEqual(7, len(self.notifier.versioned_notifications),
+                              'Unexpected number of versioned notifications. '
+                              'Got: %s' % self.notifier.versioned_notifications)
++        # the block device mapping is using SWAP_ERR_OLD_VOL because this is
++        # the cinder volume migrate workflow.
+         block_devices = [{
+             "nova_object.data": {
+                 "boot_index": None,
+diff --git a/nova/tests/functional/regressions/test_bug_1943431.py b/nova/tests/functional/regressions/test_bug_1943431.py
+index 69c900c..5e945de 100644
+--- a/nova/tests/functional/regressions/test_bug_1943431.py
++++ b/nova/tests/functional/regressions/test_bug_1943431.py
+@@ -16,6 +16,7 @@
+ 
+ from nova import context
+ from nova import objects
++from nova.tests.functional.api import client
+ from nova.tests.functional import integrated_helpers
+ from nova.tests.functional.libvirt import base
+ from nova.virt import block_device as driver_block_device
+@@ -46,6 +47,8 @@
+         self.start_compute()
+ 
+     def test_ro_multiattach_swap_volume(self):
++        # NOTE(sean-k-mooney): This test is emulating calling swap volume
++        # directly instead of using cinder volume migrate or retype.
+         server_id = self._create_server(networks='none')['id']
+         self.api.post_server_volume(
+             server_id,
+@@ -58,47 +61,13 @@
+         self._wait_for_volume_attach(
+             server_id, self.cinder.MULTIATTACH_RO_SWAP_OLD_VOL)
+ 
+-        # Swap between the old and new volumes
+-        self.api.put_server_volume(
+-            server_id,
+-            self.cinder.MULTIATTACH_RO_SWAP_OLD_VOL,
++        # NOTE(sean-k-mooney): because of bug 2112187 directly using
++        # swap volume is not supported and should fail.
++        ex = self.assertRaises(
++            client.OpenStackApiException, self.api.put_server_volume,
++            server_id, self.cinder.MULTIATTACH_RO_SWAP_OLD_VOL,
+             self.cinder.MULTIATTACH_RO_SWAP_NEW_VOL)
+-
+-        # Wait until the old volume is detached and new volume is attached
+-        self._wait_for_volume_detach(
+-            server_id, self.cinder.MULTIATTACH_RO_SWAP_OLD_VOL)
+-        self._wait_for_volume_attach(
+-            server_id, self.cinder.MULTIATTACH_RO_SWAP_NEW_VOL)
+-
+-        bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
+-            context.get_admin_context(),
+-            self.cinder.MULTIATTACH_RO_SWAP_NEW_VOL,
+-            server_id)
+-        connection_info = jsonutils.loads(bdm.connection_info)
+-
+-        # Assert that only the new volume UUID is referenced within the stashed
+-        # connection_info and returned by driver_block_device.get_volume_id
+-        self.assertIn('volume_id', connection_info.get('data'))
+-        self.assertEqual(
+-            self.cinder.MULTIATTACH_RO_SWAP_NEW_VOL,
+-            connection_info['data']['volume_id'])
+-        self.assertIn('volume_id', connection_info)
+-        self.assertEqual(
+-            self.cinder.MULTIATTACH_RO_SWAP_NEW_VOL,
+-            connection_info['volume_id'])
+-        self.assertIn('serial', connection_info)
+-        self.assertEqual(
+-            self.cinder.MULTIATTACH_RO_SWAP_NEW_VOL,
+-            connection_info.get('serial'))
+-        self.assertEqual(
+-            self.cinder.MULTIATTACH_RO_SWAP_NEW_VOL,
+-            driver_block_device.get_volume_id(connection_info))
+-
+-        # Assert that the new volume can be detached from the instance
+-        self.api.delete_server_volume(
+-            server_id, self.cinder.MULTIATTACH_RO_SWAP_NEW_VOL)
+-        self._wait_for_volume_detach(
+-            server_id, self.cinder.MULTIATTACH_RO_SWAP_NEW_VOL)
++        self.assertIn("this api should only be called by Cinder", str(ex))
+ 
+     def test_ro_multiattach_migrate_volume(self):
+         server_id = self._create_server(networks='none')['id']
+diff --git a/nova/tests/functional/regressions/test_bug_2112187.py b/nova/tests/functional/regressions/test_bug_2112187.py
+new file mode 100644
+index 0000000..276e919
+--- /dev/null
++++ b/nova/tests/functional/regressions/test_bug_2112187.py
+@@ -0,0 +1,67 @@
++# Licensed under the Apache License, Version 2.0 (the "License"); you may
++# not use this file except in compliance with the License. You may obtain
++# a copy of the License at
++#
++#      http://www.apache.org/licenses/LICENSE-2.0
++#
++# Unless required by applicable law or agreed to in writing, software
++# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
++# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
++# License for the specific language governing permissions and limitations
++# under the License.
++
++from nova.tests.functional.api import client
++from nova.tests.functional import integrated_helpers
++from nova.tests.functional.libvirt import base
++
++
++class TestDirectSwapVolume(
++    base.ServersTestBase,
++    integrated_helpers.InstanceHelperMixin
++):
++    """Regression test for bug 2112187
++
++    During a Cinder orchestrated volume migration nova leaves the
++    stashed connection_info of the attachment pointing at the original
++    volume UUID used during the migration because cinder will atomically
++    revert the UUID of the volume back to the original value.
++
++    When swap volume is used directly the uuid should be updated
++    in the libvirt xml but nova does not support that today.
++    That results in the uuid in the xml and the uuid in the BDMs
++    being out of sync.
++
++    As a result it is unsafe to allow direct swap volume.
++    """
++
++    microversion = 'latest'
++    ADMIN_API = True
++
++    def setUp(self):
++        super().setUp()
++        self.start_compute()
++
++    def test_direct_swap_volume(self):
++        # NOTE(sean-k-mooney): This test is emulating calling swap volume
++        # directly instead of using cinder volume migrate or retype.
++        server_id = self._create_server(networks='none')['id']
++        # We do not need to use a multiattach volume; any volume
++        # that does not have a migration status set will work.
++        self.api.post_server_volume(
++            server_id,
++            {
++                'volumeAttachment': {
++                    'volumeId': self.cinder.MULTIATTACH_RO_SWAP_OLD_VOL
++                }
++            }
++        )
++        self._wait_for_volume_attach(
++            server_id, self.cinder.MULTIATTACH_RO_SWAP_OLD_VOL)
++
++        # NOTE(sean-k-mooney): because of bug 2112187 directly using
++        # swap volume is not supported and should fail.
++        ex = self.assertRaises(
++            client.OpenStackApiException, self.api.put_server_volume,
++            server_id, self.cinder.MULTIATTACH_RO_SWAP_OLD_VOL,
++            self.cinder.MULTIATTACH_RO_SWAP_NEW_VOL)
++        self.assertIn("this api should only be called by Cinder", str(ex))
+diff --git a/nova/tests/unit/api/openstack/compute/test_volumes.py b/nova/tests/unit/api/openstack/compute/test_volumes.py
+index d3d5c35..6ff33a3 100644
+--- a/nova/tests/unit/api/openstack/compute/test_volumes.py
++++ b/nova/tests/unit/api/openstack/compute/test_volumes.py
+@@ -15,14 +15,16 @@
+ #    under the License.
+ 
+ import datetime
+-from unittest import mock
+ import urllib
+ 
++from unittest import mock
++
+ import fixtures
++import webob
++
+ from oslo_serialization import jsonutils
+ from oslo_utils import encodeutils
+ from oslo_utils.fixture import uuidsentinel as uuids
+-import webob
+ from webob import exc
+ 
+ from nova.api.openstack import api_version_request
+@@ -65,17 +67,28 @@
+     return fake_instance.fake_instance_obj(
+         context, id=1, uuid=instance_id, project_id=context.project_id)
+ 
++# TODO(sean-k-mooney): this is duplicated in the policy tests
++# we should consider consolidating this.
++
+ 
+ def fake_get_volume(self, context, id):
++    migration_status = None
+     if id == FAKE_UUID_A:
+         status = 'in-use'
+         attach_status = 'attached'
+     elif id == FAKE_UUID_B:
+         status = 'available'
+         attach_status = 'detached'
++    elif id == uuids.source_swap_vol:
++        status = 'in-use'
++        attach_status = 'attached'
++        migration_status = 'migrating'
+     else:
+         raise exception.VolumeNotFound(volume_id=id)
+-    return {'id': id, 'status': status, 'attach_status': attach_status}
++    return {
++        'id': id, 'status': status, 'attach_status': attach_status,
++        'migration_status': migration_status
++    }
+ 
+ 
+ def fake_create_snapshot(self, context, volume, name, description):
+@@ -99,7 +112,7 @@
+ 
+ @classmethod
+ def fake_bdm_get_by_volume_and_instance(cls, ctxt, volume_id, instance_uuid):
+-    if volume_id != FAKE_UUID_A:
++    if volume_id not in (FAKE_UUID_A, uuids.source_swap_vol):
+         raise exception.VolumeBDMNotFound(volume_id=volume_id)
+     db_bdm = fake_block_device.FakeDbBlockDeviceDict({
+         'id': 1,
+@@ -110,7 +123,7 @@
+         'source_type': 'volume',
+         'destination_type': 'volume',
+         'snapshot_id': None,
+-        'volume_id': FAKE_UUID_A,
++        'volume_id': volume_id,
+         'volume_size': 1,
+         'attachment_id': uuids.attachment_id
+     })
+@@ -572,6 +585,7 @@
+             test.MatchType(objects.Instance),
+             {'attach_status': 'attached',
+              'status': 'in-use',
++             'migration_status': None,
+              'id': FAKE_UUID_A})
+ 
+     @mock.patch.object(compute_api.API, 'detach_volume')
+@@ -585,7 +599,8 @@
+             test.MatchType(objects.Instance),
+             {'attach_status': 'attached',
+              'status': 'in-use',
+-             'id': FAKE_UUID_A})
++             'id': FAKE_UUID_A,
++             'migration_status': None})
+ 
+     def test_attach_volume(self):
+         self.stub_out('nova.compute.api.API.attach_volume',
+@@ -739,7 +754,7 @@
+         self.assertRaises(exc.HTTPBadRequest, self.attachments.create,
+                           req, FAKE_UUID, body=body)
+ 
+-    def _test_swap(self, attachments, uuid=FAKE_UUID_A, body=None):
++    def _test_swap(self, attachments, uuid=uuids.source_swap_vol, body=None):
+         body = body or {'volumeAttachment': {'volumeId': FAKE_UUID_B}}
+         return attachments.update(self.req, uuids.instance, uuid, body=body)
+ 
+@@ -754,10 +769,13 @@
+             self.req.environ['nova.context'], test.MatchType(objects.Instance),
+             {'attach_status': 'attached',
+              'status': 'in-use',
+-             'id': FAKE_UUID_A},
++             'id': uuids.source_swap_vol,
++             'migration_status': 'migrating'
++             },
+             {'attach_status': 'detached',
+              'status': 'available',
+-             'id': FAKE_UUID_B})
++             'id': FAKE_UUID_B,
++             'migration_status': None})
+ 
+     @mock.patch.object(compute_api.API, 'swap_volume')
+     def test_swap_volume(self, mock_swap_volume):
+@@ -774,10 +792,12 @@
+             self.req.environ['nova.context'], test.MatchType(objects.Instance),
+             {'attach_status': 'attached',
+              'status': 'in-use',
+-             'id': FAKE_UUID_A},
++             'id': uuids.source_swap_vol,
++             'migration_status': 'migrating'},
+             {'attach_status': 'detached',
+              'status': 'available',
+-             'id': FAKE_UUID_B})
++             'id': FAKE_UUID_B,
++             'migration_status': None})
+ 
+     def test_swap_volume_with_nonexistent_uri(self):
+         self.assertRaises(exc.HTTPNotFound, self._test_swap,
+@@ -786,13 +806,14 @@
+     @mock.patch.object(cinder.API, 'get')
+     def test_swap_volume_with_nonexistent_dest_in_body(self, mock_get):
+         mock_get.side_effect = [
+-            None, exception.VolumeNotFound(volume_id=FAKE_UUID_C)]
++            fake_get_volume(None, None, uuids.source_swap_vol),
++            exception.VolumeNotFound(volume_id=FAKE_UUID_C)]
+         body = {'volumeAttachment': {'volumeId': FAKE_UUID_C}}
+         with mock.patch.object(self.attachments, '_update_volume_regular'):
+             self.assertRaises(exc.HTTPBadRequest, self._test_swap,
+                               self.attachments, body=body)
+         mock_get.assert_has_calls([
+-            mock.call(self.req.environ['nova.context'], FAKE_UUID_A),
++            mock.call(self.req.environ['nova.context'], uuids.source_swap_vol),
+             mock.call(self.req.environ['nova.context'], FAKE_UUID_C)])
+ 
+     def test_swap_volume_without_volumeId(self):
+@@ -823,8 +844,9 @@
+                           self.attachments)
+         if mock_bdm.called:
+             # New path includes regular PUT procedure
+-            mock_bdm.assert_called_once_with(self.req.environ['nova.context'],
+-                                             FAKE_UUID_A, uuids.instance)
++            mock_bdm.assert_called_once_with(
++                self.req.environ['nova.context'],
++                uuids.source_swap_vol, uuids.instance)
+             mock_swap_volume.assert_not_called()
+         else:
+             # Old path is pure swap-volume
+@@ -834,10 +856,12 @@
+                 test.MatchType(objects.Instance),
+                 {'attach_status': 'attached',
+                  'status': 'in-use',
+-                 'id': FAKE_UUID_A},
++                 'migration_status': 'migrating',
++                 'id': uuids.source_swap_vol},
+                 {'attach_status': 'detached',
+                  'status': 'available',
+-                 'id': FAKE_UUID_B})
++                 'id': FAKE_UUID_B,
++                 'migration_status': None})
+ 
+     def _test_list_with_invalid_filter(self, url):
+         req = self._build_request(url)
+@@ -1191,7 +1215,7 @@
+             self.context,
+             id=1,
+             instance_uuid=FAKE_UUID,
+-            volume_id=FAKE_UUID_A,
++            volume_id=uuids.source_swap_vol,
+             source_type='volume',
+             destination_type='volume',
+             delete_on_termination=False,
+@@ -1306,7 +1330,7 @@
+             self.context,
+             id=1,
+             instance_uuid=FAKE_UUID,
+-            volume_id=FAKE_UUID_A,
++            volume_id=uuids.source_swap_vol,
+             source_type='volume',
+             destination_type='volume',
+             delete_on_termination=False,
+@@ -1322,7 +1346,7 @@
+             'delete_on_termination': True,
+         }}
+         self.attachments.update(self.req, FAKE_UUID,
+-                                FAKE_UUID_A, body=body)
++                                uuids.source_swap_vol, body=body)
+         mock_bdm_save.assert_called_once()
+         self.assertTrue(vol_bdm['delete_on_termination'])
+         # Swap volume is tested elsewhere, just make sure that we did
+@@ -1339,7 +1363,7 @@
+             self.context,
+             id=1,
+             instance_uuid=FAKE_UUID,
+-            volume_id=FAKE_UUID_A,
++            volume_id=uuids.source_swap_vol,
+             source_type='volume',
+             destination_type='volume',
+             delete_on_termination=False,
+@@ -1354,7 +1378,7 @@
+         }}
+         req = self._get_req(body, microversion='2.84')
+         self.attachments.update(req, FAKE_UUID,
+-                                FAKE_UUID_A, body=body)
++                                uuids.source_swap_vol, body=body)
+         mock_swap.assert_called_once()
+         mock_bdm_save.assert_not_called()
+ 
+@@ -1640,6 +1664,7 @@
+                     'id': volume_id,
+                     'size': 1,
+                     'multiattach': True,
++                    'migration_status': 'migrating',
+                     'attachments': {
+                         uuids.server1: {
+                             'attachment_id': uuids.attachment_id1,
+@@ -1689,12 +1714,12 @@
+             ex = self.assertRaises(
+                 webob.exc.HTTPBadRequest, controller.update, req,
+                 uuids.server1, uuids.old_vol_id, body=body)
+-        self.assertIn('Swapping multi-attach volumes with more than one ',
+-                      str(ex))
+-        mock_attachment_get.assert_has_calls([
+-            mock.call(ctxt, uuids.attachment_id1),
+-            mock.call(ctxt, uuids.attachment_id2)], any_order=True)
+-        mock_roll_detaching.assert_called_once_with(ctxt, uuids.old_vol_id)
++            self.assertIn(
++                'Swapping multi-attach volumes with more than one ', str(ex))
++            mock_attachment_get.assert_has_calls([
++                mock.call(ctxt, uuids.attachment_id1),
++                mock.call(ctxt, uuids.attachment_id2)], any_order=True)
++            mock_roll_detaching.assert_called_once_with(ctxt, uuids.old_vol_id)
+ 
+ 
+ class CommonBadRequestTestCase(object):
+diff --git a/nova/tests/unit/policies/test_volumes.py b/nova/tests/unit/policies/test_volumes.py
+index 896881c..f4070a6 100644
+--- a/nova/tests/unit/policies/test_volumes.py
++++ b/nova/tests/unit/policies/test_volumes.py
+@@ -38,7 +38,7 @@
+ 
+ 
+ def fake_bdm_get_by_volume_and_instance(cls, ctxt, volume_id, instance_uuid):
+-    if volume_id != FAKE_UUID_A:
++    if volume_id not in (FAKE_UUID_A, uuids.source_swap_vol):
+         raise exception.VolumeBDMNotFound(volume_id=volume_id)
+     db_bdm = fake_block_device.FakeDbBlockDeviceDict(
+         {'id': 1,
+@@ -55,15 +55,23 @@
+ 
+ 
+ def fake_get_volume(self, context, id):
++    migration_status = None
+     if id == FAKE_UUID_A:
+         status = 'in-use'
+         attach_status = 'attached'
+     elif id == FAKE_UUID_B:
+         status = 'available'
+         attach_status = 'detached'
++    elif id == uuids.source_swap_vol:
++        status = 'in-use'
++        attach_status = 'attached'
++        migration_status = 'migrating'
+     else:
+         raise exception.VolumeNotFound(volume_id=id)
+-    return {'id': id, 'status': status, 'attach_status': attach_status}
++    return {
++        'id': id, 'status': status, 'attach_status': attach_status,
++        'migration_status': migration_status
++    }
+ 
+ 
+ class VolumeAttachPolicyTest(base.BasePolicyTest):
+@@ -163,9 +171,10 @@
+     def test_swap_volume_attach_policy(self, mock_swap_volume):
+         rule_name = self.policy_root % "swap"
+         body = {'volumeAttachment': {'volumeId': FAKE_UUID_B}}
+-        self.common_policy_auth(self.project_admin_authorized_contexts,
+-                                rule_name, self.controller.update,
+-                                self.req, FAKE_UUID, FAKE_UUID_A, body=body)
++        self.common_policy_auth(
++            self.project_admin_authorized_contexts,
++            rule_name, self.controller.update,
++            self.req, FAKE_UUID, uuids.source_swap_vol, body=body)
+ 
+     @mock.patch.object(block_device_obj.BlockDeviceMapping, 'save')
+     @mock.patch('nova.compute.api.API.swap_volume')
+@@ -198,9 +207,10 @@
+         req = fakes.HTTPRequest.blank('', version='2.85')
+         body = {'volumeAttachment': {'volumeId': FAKE_UUID_B,
+             'delete_on_termination': True}}
+-        self.common_policy_auth(self.project_admin_authorized_contexts,
+-                                rule_name, self.controller.update,
+-                                req, FAKE_UUID, FAKE_UUID_A, body=body)
++        self.common_policy_auth(
++            self.project_admin_authorized_contexts,
++            rule_name, self.controller.update,
++            req, FAKE_UUID, uuids.source_swap_vol, body=body)
+         mock_swap_volume.assert_called()
+         mock_bdm_save.assert_called()
+ 
+diff --git a/releasenotes/notes/bug-2112187-e1c1d40f090e421b.yaml b/releasenotes/notes/bug-2112187-e1c1d40f090e421b.yaml
+new file mode 100644
+index 0000000..bd7fc56
+--- /dev/null
++++ b/releasenotes/notes/bug-2112187-e1c1d40f090e421b.yaml
+@@ -0,0 +1,36 @@
++---
++security:
++  - |
++    Nova has long documented that the ``update volume attachment`` API
++    PUT /servers/{server_id}/os-volume_attachments/{volume_id}
++    should not be called directly.
++
++    "When updating volumeId, this API is typically meant to only
++    be used as part of a larger orchestrated volume migration
++    operation initiated in the block storage service via
++    the os-retype or os-migrate_volume volume actions.
++    Direct usage of this API to update volumeId is not recommended
++    and may result in needing to hard reboot the server
++    to update details within the guest such as block storage serial IDs.
++    Furthermore, updating volumeId via this API is only implemented
++    by certain compute drivers."
++
++    As an admin-only API, direct usage has always been limited to admins
++    or services like ``watcher``.
++    This longstanding recommendation is now enforced as a security
++    hardening measure: the API is now restricted to cinder only.
++    The prior warning alluded to the fact that directly using this
++    API can result in a guest with a de-synced definition of the volume
++    serial. Before this change it was possible for an admin to unknowingly
++    put a VM in an inconsistent state such that a future live migration
++    might fail, or succeed and break tenant isolation. This could not
++    happen when the API was called by cinder, so Nova has restricted the
++    API exclusively to that use case.
++    See https://bugs.launchpad.net/nova/+bug/2112187 for details.
++
++fixes:
++  - |
++    ``Nova`` now strictly enforces that only ``cinder`` can call the
++    ``update volume attachment`` (aka ``swap volume``) API. This is part
++    of addressing a security hardening gap identified in bug:
++    https://bugs.launchpad.net/nova/+bug/2112187
diff -Nru nova-31.0.0/debian/patches/series nova-31.0.0/debian/patches/series
--- nova-31.0.0/debian/patches/series	2025-07-12 11:35:02.000000000 +0200
+++ nova-31.0.0/debian/patches/series	2025-08-21 09:10:49.000000000 +0200
@@ -5,3 +5,4 @@
 fix-exception.NovaException.patch
 Add-context-switch-chance-to-other-thread-during-get_available_resources.patch
 Fix-neutron-client-dict-grabbing.patch
+OSSN-0094_restrict_swap_volume_to_cinder.patch
diff -Nru nova-31.0.0/debian/rules nova-31.0.0/debian/rules
--- nova-31.0.0/debian/rules	2025-07-12 11:35:02.000000000 +0200
+++ nova-31.0.0/debian/rules	2025-08-21 09:10:49.000000000 +0200
@@ -61,7 +61,9 @@
 ifeq (,$(findstring nocheck, $(DEB_BUILD_OPTIONS)))
 	# Fails on buildd:
 	# db.main.test_api.ArchiveTestCase.test_archive_deleted_rows_task_log
-	pkgos-dh_auto_test --no-py2 'nova\.tests\.unit\.(?!(.*virt.libvirt\.test_driver\.LibvirtConnTestCase\.test_spawn_with_config_drive.*|.*test_wsgi\.TestWSGIServerWithSSL.*|.*test_hacking\.HackingTestCase.*|.*CreateInstanceTypeTest\.test_name_with_non_printable_characters.*|.*PatternPropertiesTestCase\.test_validate_patternProperties_fails.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_get_disk_xml.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_detach_volume_with_vir_domain_affect_live_flag.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_update_volume_xml.*|.*console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_tcp_rst_no_compute_rpcapi.*|.*virt\.libvirt\.test_blockinfo\.LibvirtBlockInfoTest\.test_get_disk_mapping_rescue_with_config.*|.*virt\.libvirt\.test_blockinfo\.LibvirtBlockInfoTest\.test_get_disk_mapping_stable_rescue_ide_cdrom.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_connect.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_disconnect.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_get_config.*|.*virt\.libvirt\.volume\.test_scaleio\.LibvirtScaleIOVolumeDriverTestCase.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_cross_cell_move_rbd_flatten_fetch_image_cache.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_blk_controller_no_unmap.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_no_unmap.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_valid_controller.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_rbd_image_flatten_during_fetch_image_cache.*|.*test_utils\.GenericUtilsTestCase\.test_temporary_chown.*|console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_reject_open_redirect|console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_reject_open_redirect_3_slashes|privsep\.test_utils\.SupportDirectIOTestCase\.test_supports_direct_io_with_exception_in_open|privsep\.test_utils\.SupportDirectIOTestCase\.test_supports_direct_io_with_exception_in_write|notifications\.objects\.test_objects\.TestObjectVersions\.test_versions|objects\.test_objects\.TestObjectVersions\.test_versions|notifications\.objects\.test_notification\.TestNotificationObjectVersions\.test_versions|db\.main\.test_api\.ArchiveTestCase\.test_archive_deleted_rows_task_log|db\.main\.test_api\.UnsupportedDbRegexpTestCase\.test_instance_get_all_by_filters_sort_keys))'
+	# Non-deterministic (see: https://bugs.launchpad.net/nova/+bug/2121125):
+	# nova.tests.unit.compute.test_compute.ComputeTestCase.test_add_remove_fixed_ip_updates_instance_updated_at
+	pkgos-dh_auto_test --no-py2 'nova\.tests\.unit\.(?!(.*virt.libvirt\.test_driver\.LibvirtConnTestCase\.test_spawn_with_config_drive.*|.*test_wsgi\.TestWSGIServerWithSSL.*|.*test_hacking\.HackingTestCase.*|.*CreateInstanceTypeTest\.test_name_with_non_printable_characters.*|.*PatternPropertiesTestCase\.test_validate_patternProperties_fails.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_get_disk_xml.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_detach_volume_with_vir_domain_affect_live_flag.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_update_volume_xml.*|.*console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_tcp_rst_no_compute_rpcapi.*|.*virt\.libvirt\.test_blockinfo\.LibvirtBlockInfoTest\.test_get_disk_mapping_rescue_with_config.*|.*virt\.libvirt\.test_blockinfo\.LibvirtBlockInfoTest\.test_get_disk_mapping_stable_rescue_ide_cdrom.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_connect.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_disconnect.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_get_config.*|.*virt\.libvirt\.volume\.test_scaleio\.LibvirtScaleIOVolumeDriverTestCase.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_cross_cell_move_rbd_flatten_fetch_image_cache.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_blk_controller_no_unmap.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_no_unmap.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_valid_controller.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_rbd_image_flatten_during_fetch_image_cache.*|.*test_utils\.GenericUtilsTestCase\.test_temporary_chown.*|console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_reject_open_redirect|console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_reject_open_redirect_3_slashes|privsep\.test_utils\.SupportDirectIOTestCase\.test_supports_direct_io_with_exception_in_open|privsep\.test_utils\.SupportDirectIOTestCase\.test_supports_direct_io_with_exception_in_write|notifications\.objects\.test_objects\.TestObjectVersions\.test_versions|objects\.test_objects\.TestObjectVersions\.test_versions|notifications\.objects\.test_notification\.TestNotificationObjectVersions\.test_versions|db\.main\.test_api\.ArchiveTestCase\.test_archive_deleted_rows_task_log|db\.main\.test_api\.UnsupportedDbRegexpTestCase\.test_instance_get_all_by_filters_sort_keys|compute\.test_compute\.ComputeTestCase\.test_add_remove_fixed_ip_updates_instance_updated_at))'
 endif
 
 	rm -rf $(CURDIR)/debian/tmp/usr/etc
diff -Nru nova-31.0.0/debian/tests/unittests nova-31.0.0/debian/tests/unittests
--- nova-31.0.0/debian/tests/unittests	2025-07-12 11:35:02.000000000 +0200
+++ nova-31.0.0/debian/tests/unittests	2025-08-21 09:10:49.000000000 +0200
@@ -2,4 +2,4 @@
 
 set -e
 
-pkgos-dh_auto_test --no-py2 'nova\.tests\.unit\.(?!(.*virt.libvirt\.test_driver\.LibvirtConnTestCase\.test_spawn_with_config_drive.*|.*test_wsgi\.TestWSGIServerWithSSL.*|.*test_hacking\.HackingTestCase.*|.*CreateInstanceTypeTest\.test_name_with_non_printable_characters.*|.*PatternPropertiesTestCase\.test_validate_patternProperties_fails.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_get_disk_xml.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_detach_volume_with_vir_domain_affect_live_flag.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_update_volume_xml.*|.*console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_tcp_rst_no_compute_rpcapi.*|.*virt\.libvirt\.test_blockinfo\.LibvirtBlockInfoTest\.test_get_disk_mapping_rescue_with_config.*|.*virt\.libvirt\.test_blockinfo\.LibvirtBlockInfoTest\.test_get_disk_mapping_stable_rescue_ide_cdrom.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_connect.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_disconnect.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_get_config.*|.*virt\.libvirt\.volume\.test_scaleio\.LibvirtScaleIOVolumeDriverTestCase.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_cross_cell_move_rbd_flatten_fetch_image_cache.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_blk_controller_no_unmap.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_no_unmap.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_valid_controller.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_rbd_image_flatten_during_fetch_image_cache.*|.*test_utils\.GenericUtilsTestCase\.test_temporary_chown.*|.*virt\.libvirt\.volume\.test_vzstorage\.LibvirtVZStorageTestCase.*|.*compute\.test_virtapi\.ComputeVirtAPITest\.test_wait_for_instance_event_one_received_one_timed_out.*|.*virt\.libvirt\.volume\.test_iser\.LibvirtISERVolumeDriverTestCase\.test_get_transport.*|console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_reject_open_redirect|console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_reject_open_redirect_3_slashes|privsep\.test_utils\.SupportDirectIOTestCase\.test_supports_direct_io_with_exception_in_open|privsep\.test_utils\.SupportDirectIOTestCase\.test_supports_direct_io_with_exception_in_write|notifications\.objects\.test_objects\.TestObjectVersions\.test_versions|objects\.test_objects\.TestObjectVersions\.test_versions|notifications\.objects\.test_notification\.TestNotificationObjectVersions\.test_versions|db\.main\.test_api\.ArchiveTestCase\.test_archive_deleted_rows_task_log|db\.main\.test_api\.UnsupportedDbRegexpTestCase\.test_instance_get_all_by_filters_sort_keys))'
+pkgos-dh_auto_test --no-py2 'nova\.tests\.unit\.(?!(.*virt.libvirt\.test_driver\.LibvirtConnTestCase\.test_spawn_with_config_drive.*|.*test_wsgi\.TestWSGIServerWithSSL.*|.*test_hacking\.HackingTestCase.*|.*CreateInstanceTypeTest\.test_name_with_non_printable_characters.*|.*PatternPropertiesTestCase\.test_validate_patternProperties_fails.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_get_disk_xml.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_detach_volume_with_vir_domain_affect_live_flag.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_update_volume_xml.*|.*console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_tcp_rst_no_compute_rpcapi.*|.*virt\.libvirt\.test_blockinfo\.LibvirtBlockInfoTest\.test_get_disk_mapping_rescue_with_config.*|.*virt\.libvirt\.test_blockinfo\.LibvirtBlockInfoTest\.test_get_disk_mapping_stable_rescue_ide_cdrom.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_connect.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_disconnect.*|.*virt\.libvirt\.volume\.test_nvme\.LibvirtNVMEVolumeDriverTestCase\.test_libvirt_nvme_driver_get_config.*|.*virt\.libvirt\.volume\.test_scaleio\.LibvirtScaleIOVolumeDriverTestCase.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_cross_cell_move_rbd_flatten_fetch_image_cache.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_blk_controller_no_unmap.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_no_unmap.*|.*virt\.libvirt\.test_driver\.LibvirtConnTestCase\.test_check_discard_for_attach_volume_valid_controller.*|.*virt\.libvirt\.test_driver\.LibvirtDriverTestCase\.test_rbd_image_flatten_during_fetch_image_cache.*|.*test_utils\.GenericUtilsTestCase\.test_temporary_chown.*|.*virt\.libvirt\.volume\.test_vzstorage\.LibvirtVZStorageTestCase.*|.*compute\.test_virtapi\.ComputeVirtAPITest\.test_wait_for_instance_event_one_received_one_timed_out.*|.*virt\.libvirt\.volume\.test_iser\.LibvirtISERVolumeDriverTestCase\.test_get_transport.*|console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_reject_open_redirect|console\.test_websocketproxy\.NovaProxyRequestHandlerTestCase\.test_reject_open_redirect_3_slashes|privsep\.test_utils\.SupportDirectIOTestCase\.test_supports_direct_io_with_exception_in_open|privsep\.test_utils\.SupportDirectIOTestCase\.test_supports_direct_io_with_exception_in_write|notifications\.objects\.test_objects\.TestObjectVersions\.test_versions|objects\.test_objects\.TestObjectVersions\.test_versions|notifications\.objects\.test_notification\.TestNotificationObjectVersions\.test_versions|db\.main\.test_api\.ArchiveTestCase\.test_archive_deleted_rows_task_log|db\.main\.test_api\.UnsupportedDbRegexpTestCase\.test_instance_get_all_by_filters_sort_keys|compute\.test_compute\.ComputeTestCase\.test_add_remove_fixed_ip_updates_instance_updated_at))'

--- End Message ---
--- Begin Message ---
Package: release.debian.org
Version: 13.2

Hi,

The updates referenced in each of these bugs were included in today's
13.2 trixie point release.

Regards,

Adam

--- End Message ---
