
Bug#987267: marked as done (unblock: neutron/17.1.1-4)



Your message dated Wed, 28 Apr 2021 20:18:41 +0000
with message-id <E1lbqdt-0007EN-LG@respighi.debian.org>
and subject line unblock neutron
has caused the Debian Bug report #987267,
regarding unblock: neutron/17.1.1-4
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
987267: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=987267
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: release.debian.org
Severity: normal
User: release.debian.org@packages.debian.org
Usertags: unblock

Please unblock package neutron

I've packaged the latest upstream point release, because it addresses
some important issues. Here's a link to the release notes for this
point release:

http://lists.openstack.org/pipermail/release-announce/2021-March/010755.html

I'm not interested in any one commit in particular, just in the whole
set of bug fixes described there (i.e. lots of small, annoying bug fixes).

On top of this, the package fixes RC bug #987196. Sorry that it took
me three Debian revisions of the package to get the fix right this
afternoon. :/ It is now completely fixed, however: I've tested a fresh
compute node install with version 17.1.1-3.

Debdiff from 17.1.0-2 to 17.1.1-3 attached.

Cheers,

Thomas Goirand (zigo)
diff -Nru neutron-17.1.0/debian/changelog neutron-17.1.1/debian/changelog
--- neutron-17.1.0/debian/changelog	2021-03-15 21:18:42.000000000 +0100
+++ neutron-17.1.1/debian/changelog	2021-04-20 18:59:02.000000000 +0200
@@ -1,3 +1,28 @@
+neutron (2:17.1.1-3) unstable; urgency=medium
+
+  * Remove previous (wrong) fix and correctly generate metadata_agent.ini in
+    the /usr/share/neutron-metadata-agent/ folder of the neutron-metadata-agent
+    package.
+
+ -- Thomas Goirand <zigo@debian.org>  Tue, 20 Apr 2021 18:59:02 +0200
+
+neutron (2:17.1.1-2) unstable; urgency=medium
+
+  * Add missing "pkgos_write_new_conf neutron metadata_agent.ini" in the
+    neutron-metadata-agent.postinst.
+
+ -- Thomas Goirand <zigo@debian.org>  Tue, 20 Apr 2021 17:44:05 +0200
+
+neutron (2:17.1.1-1) unstable; urgency=medium
+
+  * Tune neutron-api-uwsgi.ini for performance.
+  * New upstream release.
+  * neutron-common: do not manage metadata_agent.ini, and let the
+    neutron-metadata-agent package do it. Thanks to Andreas Beckmann for the
+    bug report. (Closes: #987196).
+
+ -- Thomas Goirand <zigo@debian.org>  Tue, 20 Apr 2021 12:31:47 +0200
+
 neutron (2:17.1.0-2) unstable; urgency=medium
 
   * Add Breaks: python3-neutron-fwaas (Closes: #985293).
diff -Nru neutron-17.1.0/debian/neutron-api-uwsgi.ini neutron-17.1.1/debian/neutron-api-uwsgi.ini
--- neutron-17.1.0/debian/neutron-api-uwsgi.ini	2021-03-15 21:18:42.000000000 +0100
+++ neutron-17.1.1/debian/neutron-api-uwsgi.ini	2021-04-20 18:59:02.000000000 +0200
@@ -12,11 +12,6 @@
 # This is running standalone
 master = true
 
-# Threads and processes
-enable-threads = true
-
-processes = 8
-
 # uwsgi recommends this to prevent thundering herd on accept.
 thunder-lock = true
 
@@ -37,6 +32,24 @@
 # exit instead of brutal reload on SIGTERM
 die-on-term = true
 
+##########################
+### Performance tuning ###
+##########################
+# Threads and processes
+enable-threads = true
+
+# For max performance, set this to the number of cores * 2
+processes = 8
+
+# This was benchmarked as a good value
+threads = 32
+
+# This is the size of the socket listen queue.
+# It greatly improves performance; it is comparable
+# to Apache's ServerLimit/MaxClients option.
+listen = 100
+
+
 ##################################
 ### OpenStack service specific ###
 ##################################
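As an aside on the "cores * 2" rule of thumb in the uwsgi tuning above:
the shipped value of 8 simply corresponds to a 4-core host. A deployment
that wanted to derive the value at install time could compute it along
these lines (a sketch, not something the package does):

    import multiprocessing

    # Rule of thumb from the uwsgi tuning comment above:
    # processes = number of cores * 2.
    processes = multiprocessing.cpu_count() * 2
    print(f"processes = {processes}")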
diff -Nru neutron-17.1.0/debian/neutron-common.postinst.in neutron-17.1.1/debian/neutron-common.postinst.in
--- neutron-17.1.0/debian/neutron-common.postinst.in	2021-03-15 21:18:42.000000000 +0100
+++ neutron-17.1.1/debian/neutron-common.postinst.in	2021-04-20 18:59:02.000000000 +0200
@@ -45,7 +45,6 @@
 	# Agents:
 	pkgos_write_new_conf neutron dhcp_agent.ini
 	pkgos_write_new_conf neutron l3_agent.ini
-	pkgos_write_new_conf neutron metadata_agent.ini
         # As pkgos_write_new_conf doesn't support different path
         # let's workaround installation of config
 	if [ ! -e /etc/neutron/plugins/ml2/openvswitch_agent.ini ] ; then
diff -Nru neutron-17.1.0/debian/neutron-common.postrm.in neutron-17.1.1/debian/neutron-common.postrm.in
--- neutron-17.1.0/debian/neutron-common.postrm.in	2021-03-15 21:18:42.000000000 +0100
+++ neutron-17.1.1/debian/neutron-common.postrm.in	2021-04-20 18:59:02.000000000 +0200
@@ -18,7 +18,6 @@
 	rm -f /etc/neutron/dhcp_agent.ini
 	rm -f /etc/neutron/plugins/ml2/openvswitch_agent.ini
 	rm -f /etc/neutron/plugins/ml2/ml2_conf.ini
-	rm -f /etc/neutron/metadata_agent.ini
 
 	[ -d /etc/neutron/plugins/ml2 ] 	&& rmdir --ignore-fail-on-non-empty /etc/neutron/plugins/ml2
 	[ -d /etc/neutron/plugins ] 		&& rmdir --ignore-fail-on-non-empty /etc/neutron/plugins
diff -Nru neutron-17.1.0/debian/rules neutron-17.1.1/debian/rules
--- neutron-17.1.0/debian/rules	2021-03-15 21:18:42.000000000 +0100
+++ neutron-17.1.1/debian/rules	2021-04-20 18:59:02.000000000 +0200
@@ -124,9 +124,9 @@
 		--namespace oslo.log
 
 	# ml2_conf.ini
-	mkdir -p $(CURDIR)/debian/neutron-common/usr/share/neutron-common/plugins/ml2
+	mkdir -p $(CURDIR)/debian/neutron-metadata-agent/usr/share/neutron-metadata-agent
 	PYTHONPATH=$(CURDIR)/debian/tmp/usr/lib/python3/dist-packages oslo-config-generator \
-		--output-file $(CURDIR)/debian/neutron-common/usr/share/neutron-common/plugins/ml2/ml2_conf.ini \
+		--output-file $(CURDIR)/debian/neutron-metadata-agent/usr/share/neutron-metadata-agent/metadata_agent.ini \
 		--wrap-width 140 \
 		--namespace neutron.ml2 \
 		--namespace oslo.log
diff -Nru neutron-17.1.0/doc/source/contributor/internals/openvswitch_firewall.rst neutron-17.1.1/doc/source/contributor/internals/openvswitch_firewall.rst
--- neutron-17.1.0/doc/source/contributor/internals/openvswitch_firewall.rst	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/doc/source/contributor/internals/openvswitch_firewall.rst	2021-03-13 02:26:48.000000000 +0100
@@ -245,16 +245,16 @@
 
 ::
 
- table=71, priority=95,icmp6,reg5=0x1,in_port=1,icmp_type=130 actions=resubmit(,94)
- table=71, priority=95,icmp6,reg5=0x1,in_port=1,icmp_type=131 actions=resubmit(,94)
- table=71, priority=95,icmp6,reg5=0x1,in_port=1,icmp_type=132 actions=resubmit(,94)
- table=71, priority=95,icmp6,reg5=0x1,in_port=1,icmp_type=135 actions=resubmit(,94)
- table=71, priority=95,icmp6,reg5=0x1,in_port=1,icmp_type=136 actions=resubmit(,94)
- table=71, priority=95,icmp6,reg5=0x2,in_port=2,icmp_type=130 actions=resubmit(,94)
- table=71, priority=95,icmp6,reg5=0x2,in_port=2,icmp_type=131 actions=resubmit(,94)
- table=71, priority=95,icmp6,reg5=0x2,in_port=2,icmp_type=132 actions=resubmit(,94)
- table=71, priority=95,icmp6,reg5=0x2,in_port=2,icmp_type=135 actions=resubmit(,94)
- table=71, priority=95,icmp6,reg5=0x2,in_port=2,icmp_type=136 actions=resubmit(,94)
+ table=71, priority=95,icmp6,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:11,ipv6_src=fe80::11,icmp_type=130 actions=resubmit(,94)
+ table=71, priority=95,icmp6,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:11,ipv6_src=fe80::11,icmp_type=131 actions=resubmit(,94)
+ table=71, priority=95,icmp6,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:11,ipv6_src=fe80::11,icmp_type=132 actions=resubmit(,94)
+ table=71, priority=95,icmp6,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:11,ipv6_src=fe80::11,icmp_type=135 actions=resubmit(,94)
+ table=71, priority=95,icmp6,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:11,ipv6_src=fe80::11,icmp_type=136 actions=resubmit(,94)
+ table=71, priority=95,icmp6,reg5=0x2,in_port=2,dl_src=fa:16:3e:a4:22:22,ipv6_src=fe80::22,icmp_type=130 actions=resubmit(,94)
+ table=71, priority=95,icmp6,reg5=0x2,in_port=2,dl_src=fa:16:3e:a4:22:22,ipv6_src=fe80::22,icmp_type=131 actions=resubmit(,94)
+ table=71, priority=95,icmp6,reg5=0x2,in_port=2,dl_src=fa:16:3e:a4:22:22,ipv6_src=fe80::22,icmp_type=132 actions=resubmit(,94)
+ table=71, priority=95,icmp6,reg5=0x2,in_port=2,dl_src=fa:16:3e:a4:22:22,ipv6_src=fe80::22,icmp_type=135 actions=resubmit(,94)
+ table=71, priority=95,icmp6,reg5=0x2,in_port=2,dl_src=fa:16:3e:a4:22:22,ipv6_src=fe80::22,icmp_type=136 actions=resubmit(,94)
 
 Following rules implement ARP spoofing protection
 
diff -Nru neutron-17.1.0/neutron/agent/common/ovs_lib.py neutron-17.1.1/neutron/agent/common/ovs_lib.py
--- neutron-17.1.0/neutron/agent/common/ovs_lib.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/common/ovs_lib.py	2021-03-13 02:26:48.000000000 +0100
@@ -528,7 +528,7 @@
         if tunnel_type == TYPE_GRE_IP6:
             # NOTE(slaweq) According to the OVS documentation L3 GRE tunnels
             # over IPv6 are not supported.
-            options['packet_type'] = 'legacy'
+            options['packet_type'] = 'legacy_l2'
         attrs.append(('options', options))
 
         return self.add_port(port_name, *attrs)
diff -Nru neutron-17.1.0/neutron/agent/dhcp/agent.py neutron-17.1.1/neutron/agent/dhcp/agent.py
--- neutron-17.1.0/neutron/agent/dhcp/agent.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/dhcp/agent.py	2021-03-13 02:26:48.000000000 +0100
@@ -27,6 +27,7 @@
 from neutron_lib import rpc as n_rpc
 from oslo_concurrency import lockutils
 from oslo_config import cfg
+from oslo_log import helpers as log_helpers
 from oslo_log import log as logging
 import oslo_messaging
 from oslo_service import loopingcall
@@ -74,6 +75,35 @@
     return wrapped
 
 
+class DHCPResourceUpdate(queue.ResourceUpdate):
+
+    def __init__(self, _id, priority, action=None, resource=None,
+                 timestamp=None, tries=5, obj_type=None):
+        super().__init__(_id, priority, action=action, resource=resource,
+                         timestamp=timestamp, tries=tries)
+        self.obj_type = obj_type
+
+    def __lt__(self, other):
+        if other.obj_type == self.obj_type == 'port':
+            # NOTE(ralonsoh): both resources should have "fixed_ips"
+            # information. That key was added to the deleted ports in this
+            # patch but this code runs in the Neutron API (server). Both the
+            # server and the DHCP agent should be updated.
+            # This check could be removed in Y release.
+            if ('fixed_ips' not in self.resource or
+                    'fixed_ips' not in other.resource):
+                return super().__lt__(other)
+
+            self_ips = set(str(fixed_ip['ip_address']) for
+                           fixed_ip in self.resource['fixed_ips'])
+            other_ips = set(str(fixed_ip['ip_address']) for
+                            fixed_ip in other.resource['fixed_ips'])
+            if self_ips & other_ips:
+                return self.timestamp < other.timestamp
+
+        return super().__lt__(other)
+
+
 class DhcpAgent(manager.Manager):
     """DHCP agent service manager.
 
@@ -445,28 +475,28 @@
 
     def network_create_end(self, context, payload):
         """Handle the network.create.end notification event."""
-        update = queue.ResourceUpdate(payload['network']['id'],
-                                      payload.get('priority',
-                                                  DEFAULT_PRIORITY),
-                                      action='_network_create',
-                                      resource=payload)
+        update = DHCPResourceUpdate(payload['network']['id'],
+                                    payload.get('priority', DEFAULT_PRIORITY),
+                                    action='_network_create',
+                                    resource=payload, obj_type='network')
         self._queue.add(update)
 
     @_wait_if_syncing
+    @log_helpers.log_method_call
     def _network_create(self, payload):
         network_id = payload['network']['id']
         self.enable_dhcp_helper(network_id)
 
     def network_update_end(self, context, payload):
         """Handle the network.update.end notification event."""
-        update = queue.ResourceUpdate(payload['network']['id'],
-                                      payload.get('priority',
-                                                  DEFAULT_PRIORITY),
-                                      action='_network_update',
-                                      resource=payload)
+        update = DHCPResourceUpdate(payload['network']['id'],
+                                    payload.get('priority', DEFAULT_PRIORITY),
+                                    action='_network_update',
+                                    resource=payload, obj_type='network')
         self._queue.add(update)
 
     @_wait_if_syncing
+    @log_helpers.log_method_call
     def _network_update(self, payload):
         network_id = payload['network']['id']
         if payload['network']['admin_state_up']:
@@ -476,28 +506,28 @@
 
     def network_delete_end(self, context, payload):
         """Handle the network.delete.end notification event."""
-        update = queue.ResourceUpdate(payload['network_id'],
-                                      payload.get('priority',
-                                                  DEFAULT_PRIORITY),
-                                      action='_network_delete',
-                                      resource=payload)
+        update = DHCPResourceUpdate(payload['network_id'],
+                                    payload.get('priority', DEFAULT_PRIORITY),
+                                    action='_network_delete',
+                                    resource=payload, obj_type='network')
         self._queue.add(update)
 
     @_wait_if_syncing
+    @log_helpers.log_method_call
     def _network_delete(self, payload):
         network_id = payload['network_id']
         self.disable_dhcp_helper(network_id)
 
     def subnet_update_end(self, context, payload):
         """Handle the subnet.update.end notification event."""
-        update = queue.ResourceUpdate(payload['subnet']['network_id'],
-                                      payload.get('priority',
-                                                  DEFAULT_PRIORITY),
-                                      action='_subnet_update',
-                                      resource=payload)
+        update = DHCPResourceUpdate(payload['subnet']['network_id'],
+                                    payload.get('priority', DEFAULT_PRIORITY),
+                                    action='_subnet_update',
+                                    resource=payload, obj_type='subnet')
         self._queue.add(update)
 
     @_wait_if_syncing
+    @log_helpers.log_method_call
     def _subnet_update(self, payload):
         network_id = payload['subnet']['network_id']
         self.refresh_dhcp_helper(network_id)
@@ -528,14 +558,14 @@
         network_id = self._get_network_lock_id(payload)
         if not network_id:
             return
-        update = queue.ResourceUpdate(network_id,
-                                      payload.get('priority',
-                                                  DEFAULT_PRIORITY),
-                                      action='_subnet_delete',
-                                      resource=payload)
+        update = DHCPResourceUpdate(network_id,
+                                    payload.get('priority', DEFAULT_PRIORITY),
+                                    action='_subnet_delete',
+                                    resource=payload, obj_type='subnet')
         self._queue.add(update)
 
     @_wait_if_syncing
+    @log_helpers.log_method_call
     def _subnet_delete(self, payload):
         network_id = self._get_network_lock_id(payload)
         if not network_id:
@@ -572,17 +602,19 @@
     def port_update_end(self, context, payload):
         """Handle the port.update.end notification event."""
         updated_port = dhcp.DictModel(payload['port'])
+        if not dhcp.port_requires_dhcp_configuration(updated_port):
+            return
         if self.cache.is_port_message_stale(updated_port):
             LOG.debug("Discarding stale port update: %s", updated_port)
             return
-        update = queue.ResourceUpdate(updated_port.network_id,
-                                      payload.get('priority',
-                                                  DEFAULT_PRIORITY),
-                                      action='_port_update',
-                                      resource=updated_port)
+        update = DHCPResourceUpdate(updated_port.network_id,
+                                    payload.get('priority', DEFAULT_PRIORITY),
+                                    action='_port_update',
+                                    resource=updated_port, obj_type='port')
         self._queue.add(update)
 
     @_wait_if_syncing
+    @log_helpers.log_method_call
     def _port_update(self, updated_port):
         if self.cache.is_port_message_stale(updated_port):
             LOG.debug("Discarding stale port update: %s", updated_port)
@@ -594,7 +626,10 @@
         self.reload_allocations(updated_port, network, prio=True)
 
     def reload_allocations(self, port, network, prio=False):
-        LOG.info("Trigger reload_allocations for port %s", port)
+        LOG.info("Trigger reload_allocations for port %s on network %s",
+                 port, network)
+        if not dhcp.port_requires_dhcp_configuration(port):
+            return
         driver_action = 'reload_allocations'
         if self._is_port_on_this_agent(port):
             orig = self.cache.get_port_by_id(port['id'])
@@ -633,14 +668,16 @@
     def port_create_end(self, context, payload):
         """Handle the port.create.end notification event."""
         created_port = dhcp.DictModel(payload['port'])
-        update = queue.ResourceUpdate(created_port.network_id,
-                                      payload.get('priority',
-                                                  DEFAULT_PRIORITY),
-                                      action='_port_create',
-                                      resource=created_port)
+        if not dhcp.port_requires_dhcp_configuration(created_port):
+            return
+        update = DHCPResourceUpdate(created_port.network_id,
+                                    payload.get('priority', DEFAULT_PRIORITY),
+                                    action='_port_create',
+                                    resource=created_port, obj_type='port')
         self._queue.add(update)
 
     @_wait_if_syncing
+    @log_helpers.log_method_call
     def _port_create(self, created_port):
         network = self.cache.get_network_by_id(created_port.network_id)
         if not network:
@@ -655,9 +692,18 @@
             if (new_ips.intersection(cached_ips) and
                 (created_port['id'] != port_cached['id'] or
                  created_port['mac_address'] != port_cached['mac_address'])):
-                self.schedule_resync("Duplicate IP addresses found, "
-                                     "DHCP cache is out of sync",
-                                     created_port.network_id)
+                resync_reason = (
+                    "Duplicate IP addresses found, "
+                    "Port in cache: {cache_port_id}, "
+                    "Created port: {port_id}, "
+                    "IPs in cache: {cached_ips}, "
+                    "new IPs: {new_ips}."
+                    "DHCP cache is out of sync").format(
+                        cache_port_id=port_cached['id'],
+                        port_id=created_port['id'],
+                        cached_ips=cached_ips,
+                        new_ips=new_ips)
+                self.schedule_resync(resync_reason, created_port.network_id)
                 return
         self.reload_allocations(created_port, network, prio=True)
 
@@ -666,14 +712,14 @@
         network_id = self._get_network_lock_id(payload)
         if not network_id:
             return
-        update = queue.ResourceUpdate(network_id,
-                                      payload.get('priority',
-                                                  DEFAULT_PRIORITY),
-                                      action='_port_delete',
-                                      resource=payload)
+        update = DHCPResourceUpdate(network_id,
+                                    payload.get('priority', DEFAULT_PRIORITY),
+                                    action='_port_delete',
+                                    resource=payload, obj_type='port')
         self._queue.add(update)
 
     @_wait_if_syncing
+    @log_helpers.log_method_call
     def _port_delete(self, payload):
         network_id = self._get_network_lock_id(payload)
         if not network_id:
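The DHCPResourceUpdate class introduced above changes how queued DHCP
updates sort: when two port events carry an overlapping fixed IP, their
timestamps take precedence over their priorities, so a port_delete for an
address can no longer be reordered after a later port_create that reuses
it. A self-contained sketch of that ordering (BaseUpdate is a simplified
stand-in for neutron's queue.ResourceUpdate, which compares priority
first, then timestamp):

    class BaseUpdate:
        def __init__(self, _id, priority, resource=None, timestamp=None):
            self.id = _id
            self.priority = priority
            self.resource = resource
            self.timestamp = timestamp

        def __lt__(self, other):
            if self.priority != other.priority:
                return self.priority < other.priority
            return self.timestamp < other.timestamp

    class DHCPUpdate(BaseUpdate):
        def __init__(self, *args, obj_type=None, **kwargs):
            super().__init__(*args, **kwargs)
            self.obj_type = obj_type

        def __lt__(self, other):
            if other.obj_type == self.obj_type == 'port':
                self_ips = {ip['ip_address']
                            for ip in self.resource['fixed_ips']}
                other_ips = {ip['ip_address']
                             for ip in other.resource['fixed_ips']}
                if self_ips & other_ips:
                    # Same address involved: replay in arrival order.
                    return self.timestamp < other.timestamp
            return super().__lt__(other)

    delete = DHCPUpdate('p1', 10, obj_type='port', timestamp=1,
                        resource={'fixed_ips': [{'ip_address': '10.0.0.5'}]})
    create = DHCPUpdate('p2', 5, obj_type='port', timestamp=2,
                        resource={'fixed_ips': [{'ip_address': '10.0.0.5'}]})
    assert delete < create  # timestamp wins despite create's higher priority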
diff -Nru neutron-17.1.0/neutron/agent/l3/agent.py neutron-17.1.1/neutron/agent/l3/agent.py
--- neutron-17.1.0/neutron/agent/l3/agent.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/l3/agent.py	2021-03-13 02:26:48.000000000 +0100
@@ -450,7 +450,6 @@
 
         if router.get('ha'):
             features.append('ha')
-            kwargs['state_change_callback'] = self.enqueue_state_change
 
         if router.get('distributed') and router.get('ha'):
             # Case 1: If the router contains information about the HA interface
@@ -465,7 +464,6 @@
             if (not router.get(lib_const.HA_INTERFACE_KEY) or
                     self.conf.agent_mode != lib_const.L3_AGENT_MODE_DVR_SNAT):
                 features.remove('ha')
-                kwargs.pop('state_change_callback')
 
         return self.router_factory.create(features, **kwargs)
 
diff -Nru neutron-17.1.0/neutron/agent/l3/dvr_edge_router.py neutron-17.1.1/neutron/agent/l3/dvr_edge_router.py
--- neutron-17.1.0/neutron/agent/l3/dvr_edge_router.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/l3/dvr_edge_router.py	2021-03-13 02:26:48.000000000 +0100
@@ -71,8 +71,10 @@
             if self.snat_namespace.exists():
                 LOG.debug("SNAT was rescheduled to host %s. Clearing snat "
                           "namespace.", self.router.get('gw_port_host'))
-                return self.external_gateway_removed(
-                    ex_gw_port, interface_name)
+                self.driver.unplug(interface_name,
+                                   namespace=self.snat_namespace.name,
+                                   prefix=router.EXTERNAL_DEV_PREFIX)
+                self.snat_namespace.delete()
             return
 
         if not self.snat_namespace.exists():
@@ -185,8 +187,8 @@
         # TODO(mlavalle): in the near future, this method should contain the
         # code in the L3 agent that creates a gateway for a dvr. The first step
         # is to move the creation of the snat namespace here
-        self.snat_namespace.create()
-        return self.snat_namespace
+        if self._is_this_snat_host():
+            self.snat_namespace.create()
 
     def _get_snat_int_device_name(self, port_id):
         long_name = lib_constants.SNAT_INT_DEV_PREFIX + port_id
diff -Nru neutron-17.1.0/neutron/agent/l3/ha.py neutron-17.1.1/neutron/agent/l3/ha.py
--- neutron-17.1.0/neutron/agent/l3/ha.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/l3/ha.py	2021-03-13 02:26:48.000000000 +0100
@@ -20,6 +20,7 @@
 from neutron_lib import constants
 from oslo_log import log as logging
 from oslo_utils import fileutils
+from oslo_utils import netutils
 import webob
 
 from neutron.agent.linux import utils as agent_utils
@@ -217,9 +218,12 @@
         # routers needs to serve metadata requests to local ports.
         if state == 'primary' or ri.router.get('distributed', False):
             LOG.debug('Spawning metadata proxy for router %s', router_id)
+            spawn_kwargs = {}
+            if netutils.is_ipv6_enabled():
+                spawn_kwargs['bind_address'] = '::'
             self.metadata_driver.spawn_monitored_metadata_proxy(
                 self.process_monitor, ri.ns_name, self.conf.metadata_port,
-                self.conf, router_id=ri.router_id)
+                self.conf, router_id=ri.router_id, **spawn_kwargs)
         else:
             LOG.debug('Closing metadata proxy for router %s', router_id)
             self.metadata_driver.destroy_monitored_metadata_proxy(
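The ha.py hunk above makes the HA router's metadata proxy bind to the
IPv6 wildcard whenever the host has IPv6 enabled, so instances can reach
the metadata service over IPv6 as well. The selection logic boils down to
this (a sketch; spawn_monitored_metadata_proxy is the neutron call shown
in the diff):

    from oslo_utils import netutils

    spawn_kwargs = {}
    if netutils.is_ipv6_enabled():
        # '::' binds the proxy on IPv6; with the usual bindv6only=0
        # sysctl it accepts v4-mapped IPv4 connections as well.
        spawn_kwargs['bind_address'] = '::'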
diff -Nru neutron-17.1.0/neutron/agent/l3/ha_router.py neutron-17.1.1/neutron/agent/l3/ha_router.py
--- neutron-17.1.0/neutron/agent/l3/ha_router.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/l3/ha_router.py	2021-03-13 02:26:48.000000000 +0100
@@ -66,12 +66,11 @@
 
 
 class HaRouter(router.RouterInfo):
-    def __init__(self, state_change_callback, *args, **kwargs):
+    def __init__(self, *args, **kwargs):
         super(HaRouter, self).__init__(*args, **kwargs)
 
         self.ha_port = None
         self.keepalived_manager = None
-        self.state_change_callback = state_change_callback
         self._ha_state = None
         self._ha_state_path = None
 
@@ -156,7 +155,6 @@
         self._init_keepalived_manager(process_monitor)
         self._check_and_set_real_state()
         self.ha_network_added()
-        self.update_initial_state(self.state_change_callback)
         self.spawn_state_change_monitor(process_monitor)
 
     def _init_keepalived_manager(self, process_monitor):
@@ -449,15 +447,6 @@
         except common_utils.WaitTimeout:
             pm.disable(sig=str(int(signal.SIGKILL)))
 
-    def update_initial_state(self, callback):
-        addresses = ip_lib.get_devices_with_ip(self.ha_namespace,
-                                               name=self.get_ha_device_name())
-        cidrs = (address['cidr'] for address in addresses)
-        ha_cidr = self._get_primary_vip()
-        state = 'primary' if ha_cidr in cidrs else 'backup'
-        self.ha_state = state
-        callback(self.router_id, state)
-
     @staticmethod
     def _gateway_ports_equal(port1, port2):
         def _get_filtered_dict(d, ignore):
@@ -553,8 +542,11 @@
         if ex_gw_port_id:
             interface_name = self.get_external_device_name(ex_gw_port_id)
             ns_name = self.get_gw_ns_name()
-            self.driver.set_link_status(interface_name, ns_name,
-                                        link_up=link_up)
+            if (not self.driver.set_link_status(
+                    interface_name, namespace=ns_name, link_up=link_up) and
+                    link_up):
+                LOG.error('Gateway interface for router %s was not set up; '
+                          'router will not work properly', self.router_id)
             if link_up and set_gw:
                 preserve_ips = self.get_router_preserve_ips()
                 self._external_gateway_settings(ex_gw_port, interface_name,
diff -Nru neutron-17.1.0/neutron/agent/l3/keepalived_state_change.py neutron-17.1.1/neutron/agent/l3/keepalived_state_change.py
--- neutron-17.1.0/neutron/agent/l3/keepalived_state_change.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/l3/keepalived_state_change.py	2021-03-13 02:26:48.000000000 +0100
@@ -107,12 +107,12 @@
             for address in ip.addr.list():
                 if address.get('cidr') == self.cidr:
                     state = 'primary'
-                    self.write_state_change(state)
-                    self.notify_agent(state)
                     break
 
             LOG.debug('Initial status of router %s is %s',
                       self.router_id, state)
+            self.write_state_change(state)
+            self.notify_agent(state)
         except Exception:
             LOG.exception('Failed to get initial status of router %s',
                           self.router_id)
diff -Nru neutron-17.1.0/neutron/agent/linux/dhcp.py neutron-17.1.1/neutron/agent/linux/dhcp.py
--- neutron-17.1.0/neutron/agent/linux/dhcp.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/linux/dhcp.py	2021-03-13 02:26:48.000000000 +0100
@@ -59,6 +59,23 @@
 DHCP_OPT_CLIENT_ID_NUM = 61
 
 
+def port_requires_dhcp_configuration(port):
+    if not getattr(port, 'device_owner', None):
+        # We can't check if port needs dhcp entry, so it will be better
+        # to create one
+        return True
+    # TODO(slaweq): define this list as a constant in neutron_lib.constants
+    # NOTE(slaweq): Not all port types which belongs e.g. to the routers can be
+    # excluded from that list. For some of them, like router interfaces used to
+    # plug subnet to the router should be configured in dnsmasq to provide DNS
+    # naming resolution. Otherwise it may slowdown e.g. traceroutes from the VM
+    return port.device_owner not in [
+        constants.DEVICE_OWNER_ROUTER_HA_INTF,
+        constants.DEVICE_OWNER_FLOATINGIP,
+        constants.DEVICE_OWNER_DHCP,
+        constants.DEVICE_OWNER_DISTRIBUTED]
+
+
 class DictModel(collections.abc.MutableMapping):
     """Convert dict into an object that provides attribute access to values."""
 
@@ -723,6 +740,9 @@
                        if subnet.ip_version == 6)
 
         for port in self.network.ports:
+            if not port_requires_dhcp_configuration(port):
+                continue
+
             fixed_ips = self._sort_fixed_ips_for_dnsmasq(port.fixed_ips,
                                                          v6_nets)
             # TODO(hjensas): Drop this conditional and option once distros
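The new port_requires_dhcp_configuration() helper above lets both the
DHCP agent's notification handlers and the dnsmasq config writer skip
ports that never need a DHCP host entry. A minimal stand-alone sketch of
the same filter (the owner strings are inlined here and believed to match
the neutron_lib.constants values, so treat them as illustrative):

    # Device owners whose ports need no DHCP host entry.
    _NO_DHCP_OWNERS = {
        'network:router_ha_interface',  # DEVICE_OWNER_ROUTER_HA_INTF
        'network:floatingip',           # DEVICE_OWNER_FLOATINGIP
        'network:dhcp',                 # DEVICE_OWNER_DHCP
        'network:distributed',          # DEVICE_OWNER_DISTRIBUTED
    }

    def port_requires_dhcp_configuration(port) -> bool:
        owner = getattr(port, 'device_owner', None)
        if not owner:
            # Can't tell what the port is for; creating an entry is the
            # safe default.
            return True
        return owner not in _NO_DHCP_OWNERS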
diff -Nru neutron-17.1.0/neutron/agent/linux/interface.py neutron-17.1.1/neutron/agent/linux/interface.py
--- neutron-17.1.0/neutron/agent/linux/interface.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/linux/interface.py	2021-03-13 02:26:48.000000000 +0100
@@ -322,14 +322,20 @@
 
     def set_link_status(self, device_name, namespace=None, link_up=True):
         ns_dev = ip_lib.IPWrapper(namespace=namespace).device(device_name)
-        if not ns_dev.exists():
-            LOG.debug("Device %s may concurrently be deleted.", device_name)
-            return
+        try:
+            utils.wait_until_true(ns_dev.exists, timeout=3)
+        except utils.WaitTimeout:
+            LOG.debug('Device %s may have been deleted concurrently',
+                      device_name)
+            return False
+
         if link_up:
             ns_dev.link.set_up()
         else:
             ns_dev.link.set_down()
 
+        return True
+
 
 class NullDriver(LinuxInterfaceDriver):
     def plug_new(self, network_id, port_id, device_name, mac_address,
diff -Nru neutron-17.1.0/neutron/agent/linux/ip_conntrack.py neutron-17.1.1/neutron/agent/linux/ip_conntrack.py
--- neutron-17.1.0/neutron/agent/linux/ip_conntrack.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/linux/ip_conntrack.py	2021-03-13 02:26:48.000000000 +0100
@@ -116,6 +116,7 @@
         ethertype = rule.get('ethertype')
         protocol = rule.get('protocol')
         direction = rule.get('direction')
+        mark = rule.get('mark')
         cmd = ['conntrack', '-D']
         if protocol is not None:
             # 0 is IP in /etc/protocols, but conntrack will throw an error
@@ -123,6 +124,8 @@
                 protocol = 'ip'
             cmd.extend(['-p', str(protocol)])
         cmd.extend(['-f', str(ethertype).lower()])
+        if mark is not None:
+            cmd.extend(['-m', str(mark)])
         cmd.append('-d' if direction == 'ingress' else '-s')
         cmd_ns = []
         if namespace:
@@ -173,10 +176,12 @@
         self._process(device_info_list, rule)
 
     def delete_conntrack_state_by_remote_ips(self, device_info_list,
-                                             ethertype, remote_ips):
+                                             ethertype, remote_ips, mark=None):
         for direction in ['ingress', 'egress']:
             rule = {'ethertype': str(ethertype).lower(),
                     'direction': direction}
+            if mark:
+                rule['mark'] = mark
             self._process(device_info_list, rule, remote_ips)
 
     def _populate_initial_zone_map(self):
@@ -254,3 +259,21 @@
                 return index + ZONE_START
         # conntrack zones exhausted :( :(
         raise exceptions.CTZoneExhaustedError()
+
+
+class OvsIpConntrackManager(IpConntrackManager):
+
+    def __init__(self, execute=None):
+        super(OvsIpConntrackManager, self).__init__(
+            get_rules_for_table_func=None,
+            filtered_ports={}, unfiltered_ports={},
+            execute=execute, namespace=None, zone_per_port=False)
+
+    def _populate_initial_zone_map(self):
+        self._device_zone_map = {}
+
+    def get_device_zone(self, port, create=False):
+        of_port = port.get('of_port')
+        if of_port is None:
+            return
+        return of_port.vlan_tag
diff -Nru neutron-17.1.0/neutron/agent/linux/ip_lib.py neutron-17.1.1/neutron/agent/linux/ip_lib.py
--- neutron-17.1.0/neutron/agent/linux/ip_lib.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/linux/ip_lib.py	2021-03-13 02:26:48.000000000 +0100
@@ -1340,35 +1340,37 @@
             'event': event}
 
 
-def _parse_link_device(namespace, device, **kwargs):
-    """Parse pytoute2 link device information
-
-    For each link device, the IP address information is retrieved and returned
-    in a dictionary.
-    IP address scope: http://linux-ip.net/html/tools-ip-address.html
-    """
-    retval = []
-    name = get_attr(device, 'IFLA_IFNAME')
-    ip_addresses = privileged.get_ip_addresses(namespace,
-                                               index=device['index'],
-                                               **kwargs)
-    for ip_address in ip_addresses:
-        retval.append(_parse_ip_address(ip_address, name))
-    return retval
-
-
 def get_devices_with_ip(namespace, name=None, **kwargs):
+    retval = []
     link_args = {}
     if name:
         link_args['ifname'] = name
     scope = kwargs.pop('scope', None)
     if scope:
         kwargs['scope'] = IP_ADDRESS_SCOPE_NAME[scope]
-    devices = privileged.get_link_devices(namespace, **link_args)
-    retval = []
-    for parsed_ips in (_parse_link_device(namespace, device, **kwargs)
-                       for device in devices):
-        retval += parsed_ips
+
+    if not link_args:
+        ip_addresses = privileged.get_ip_addresses(namespace, **kwargs)
+    else:
+        device = get_devices_info(namespace, **link_args)
+        if not device:
+            return retval
+        ip_addresses = privileged.get_ip_addresses(
+            namespace, index=device[0]['index'], **kwargs)
+
+    devices = {}  # {device index: name}
+    for ip_address in ip_addresses:
+        index = ip_address['index']
+        name = get_attr(ip_address, 'IFA_LABEL') or devices.get(index)
+        if not name:
+            device = get_devices_info(namespace, index=index)
+            if not device:
+                continue
+            name = device[0]['name']
+
+        retval.append(_parse_ip_address(ip_address, name))
+        devices[index] = name
+
     return retval
 
 
diff -Nru neutron-17.1.0/neutron/agent/linux/iptables_manager.py neutron-17.1.1/neutron/agent/linux/iptables_manager.py
--- neutron-17.1.0/neutron/agent/linux/iptables_manager.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/linux/iptables_manager.py	2021-03-13 02:26:48.000000000 +0100
@@ -308,7 +308,8 @@
     _random_fully = None
 
     def __init__(self, _execute=None, state_less=False, use_ipv6=False,
-                 nat=True, namespace=None, binary_name=binary_name):
+                 nat=True, namespace=None, binary_name=binary_name,
+                 external_lock=True):
         if _execute:
             self.execute = _execute
         else:
@@ -318,6 +319,7 @@
         self.namespace = namespace
         self.iptables_apply_deferred = False
         self.wrap_name = binary_name[:16]
+        self.external_lock = external_lock
 
         self.ipv4 = {'filter': IptablesTable(binary_name=self.wrap_name)}
         self.ipv6 = {'filter': IptablesTable(binary_name=self.wrap_name)}
@@ -463,7 +465,8 @@
         # NOTE(ihrachys) we may get rid of the lock once all supported
         # platforms get iptables with 999eaa241212d3952ddff39a99d0d55a74e3639e
         # ("iptables-restore: support acquiring the lock.")
-        with lockutils.lock(lock_name, runtime.SYNCHRONIZED_PREFIX, True):
+        with lockutils.lock(lock_name, runtime.SYNCHRONIZED_PREFIX,
+                            external=self.external_lock):
             first = self._apply_synchronized()
             if not cfg.CONF.AGENT.debug_iptables_rules:
                 return first
diff -Nru neutron-17.1.0/neutron/agent/linux/openvswitch_firewall/firewall.py neutron-17.1.1/neutron/agent/linux/openvswitch_firewall/firewall.py
--- neutron-17.1.0/neutron/agent/linux/openvswitch_firewall/firewall.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/linux/openvswitch_firewall/firewall.py	2021-03-13 02:26:48.000000000 +0100
@@ -32,6 +32,7 @@
 from neutron._i18n import _
 from neutron.agent.common import ovs_lib
 from neutron.agent import firewall
+from neutron.agent.linux import ip_conntrack
 from neutron.agent.linux.openvswitch_firewall import constants as ovsfw_consts
 from neutron.agent.linux.openvswitch_firewall import exceptions
 from neutron.agent.linux.openvswitch_firewall import iptables
@@ -476,13 +477,12 @@
         """
         self.permitted_ethertypes = cfg.CONF.SECURITYGROUP.permitted_ethertypes
         self.int_br = self.initialize_bridge(integration_bridge)
-        self.sg_port_map = SGPortMap()
-        self.conj_ip_manager = ConjIPFlowManager(self)
-        self.sg_to_delete = set()
+        self._initialize_sg()
         self._update_cookie = None
         self._deferred = False
         self.iptables_helper = iptables.Helper(self.int_br.br)
         self.iptables_helper.load_driver_if_needed()
+        self.ipconntrack = ip_conntrack.OvsIpConntrackManager()
         self._initialize_firewall()
 
         callbacks_registry.subscribe(
@@ -492,8 +492,14 @@
 
     def _init_firewall_callback(self, resource, event, trigger, payload=None):
         LOG.info("Reinitialize Openvswitch firewall after OVS restart.")
+        self._initialize_sg()
         self._initialize_firewall()
 
+    def _initialize_sg(self):
+        self.sg_port_map = SGPortMap()
+        self.conj_ip_manager = ConjIPFlowManager(self)
+        self.sg_to_delete = set()
+
     def _initialize_firewall(self):
         self._drop_all_unmatched_flows()
         self._initialize_common_flows()
@@ -608,6 +614,12 @@
         return get_physical_network_from_other_config(
             self.int_br.br, port_name)
 
+    def _delete_invalid_conntrack_entries_for_port(self, port, of_port):
+        port['of_port'] = of_port
+        for ethertype in [lib_const.IPv4, lib_const.IPv6]:
+            self.ipconntrack.delete_conntrack_state_by_remote_ips(
+                [port], ethertype, set(), mark=ovsfw_consts.CT_MARK_INVALID)
+
     def get_ofport(self, port):
         port_id = port['device']
         return self.sg_port_map.ports.get(port_id)
@@ -662,6 +674,7 @@
                 self._update_flows_for_port(of_port, old_of_port)
             else:
                 self._set_port_filters(of_port)
+            self._delete_invalid_conntrack_entries_for_port(port, of_port)
         except exceptions.OVSFWPortNotFound as not_found_error:
             LOG.info("port %(port_id)s does not exist in ovsdb: %(err)s.",
                      {'port_id': port['device'],
@@ -701,6 +714,8 @@
             else:
                 self._set_port_filters(of_port)
 
+            self._delete_invalid_conntrack_entries_for_port(port, of_port)
+
         except exceptions.OVSFWPortNotFound as not_found_error:
             LOG.info("port %(port_id)s does not exist in ovsdb: %(err)s.",
                      {'port_id': port['device'],
@@ -894,19 +909,24 @@
         self._initialize_egress(port)
         self._initialize_ingress(port)
 
-    def _initialize_egress_ipv6_icmp(self, port):
-        for icmp_type in firewall.ICMPV6_ALLOWED_EGRESS_TYPES:
-            self._add_flow(
-                table=ovs_consts.BASE_EGRESS_TABLE,
-                priority=95,
-                in_port=port.ofport,
-                reg_port=port.ofport,
-                dl_type=lib_const.ETHERTYPE_IPV6,
-                nw_proto=lib_const.PROTO_NUM_IPV6_ICMP,
-                icmp_type=icmp_type,
-                actions='resubmit(,%d)' % (
-                    ovs_consts.ACCEPTED_EGRESS_TRAFFIC_NORMAL_TABLE)
-            )
+    def _initialize_egress_ipv6_icmp(self, port, allowed_pairs):
+        # NOTE(slaweq): should we include also fe80::/64 (link-local) subnet
+        # in the allowed pairs here?
+        for mac_addr, ip_addr in allowed_pairs:
+            for icmp_type in firewall.ICMPV6_ALLOWED_EGRESS_TYPES:
+                self._add_flow(
+                    table=ovs_consts.BASE_EGRESS_TABLE,
+                    priority=95,
+                    in_port=port.ofport,
+                    reg_port=port.ofport,
+                    dl_type=lib_const.ETHERTYPE_IPV6,
+                    nw_proto=lib_const.PROTO_NUM_IPV6_ICMP,
+                    icmp_type=icmp_type,
+                    dl_src=mac_addr,
+                    ipv6_src=ip_addr,
+                    actions='resubmit(,%d)' % (
+                        ovs_consts.ACCEPTED_EGRESS_TRAFFIC_NORMAL_TABLE)
+                )
 
     def _initialize_egress_no_port_security(self, port_id, ovs_ports=None):
         try:
@@ -979,7 +999,6 @@
 
     def _initialize_egress(self, port):
         """Identify egress traffic and send it to egress base"""
-        self._initialize_egress_ipv6_icmp(port)
 
         # Apply mac/ip pairs for IPv4
         allowed_pairs = port.allowed_pairs_v4.union(
@@ -1012,6 +1031,7 @@
         # Apply mac/ip pairs for IPv6
         allowed_pairs = port.allowed_pairs_v6.union(
             {(port.mac, ip_addr) for ip_addr in port.ipv6_addresses})
+        self._initialize_egress_ipv6_icmp(port, allowed_pairs)
         for mac_addr, ip_addr in allowed_pairs:
             self._add_flow(
                 table=ovs_consts.BASE_EGRESS_TABLE,
diff -Nru neutron-17.1.0/neutron/agent/ovn/metadata/agent.py neutron-17.1.1/neutron/agent/ovn/metadata/agent.py
--- neutron-17.1.0/neutron/agent/ovn/metadata/agent.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/ovn/metadata/agent.py	2021-03-13 02:26:48.000000000 +0100
@@ -16,20 +16,23 @@
 import functools
 import re
 
+from neutron_lib import constants as n_const
+from oslo_concurrency import lockutils
+from oslo_log import log
+from oslo_utils import uuidutils
+from ovsdbapp.backend.ovs_idl import event as row_event
+from ovsdbapp.backend.ovs_idl import vlog
+import tenacity
+
 from neutron.agent.linux import external_process
 from neutron.agent.linux import ip_lib
+from neutron.agent.linux import iptables_manager
 from neutron.agent.ovn.metadata import driver as metadata_driver
 from neutron.agent.ovn.metadata import ovsdb
 from neutron.agent.ovn.metadata import server as metadata_server
 from neutron.common.ovn import constants as ovn_const
 from neutron.common import utils
 from neutron.conf.plugins.ml2.drivers.ovn import ovn_conf as config
-from neutron_lib import constants as n_const
-from oslo_concurrency import lockutils
-from oslo_log import log
-from oslo_utils import uuidutils
-from ovsdbapp.backend.ovs_idl import event as row_event
-from ovsdbapp.backend.ovs_idl import vlog
 
 
 LOG = log.getLogger(__name__)
@@ -248,6 +251,10 @@
 
         proxy.wait()
 
+    @tenacity.retry(
+        wait=tenacity.wait_exponential(
+            max=config.get_ovn_ovsdb_retry_max_interval()),
+        reraise=True)
     def register_metadata_agent(self):
         # NOTE(lucasagomes): db_add() will not overwrite the UUID if
         # it's already set.
@@ -361,6 +368,24 @@
         else:
             self.teardown_datapath(datapath)
 
+    def _ensure_datapath_checksum(self, namespace):
+        """Ensure the correct checksum in the metadata packets in DPDK bridges
+
+        (LP#1904871) In DPDK deployments (integration bridge datapath_type ==
+        "netdev"), the checksum between the metadata namespace and OVS is not
+        correctly populated.
+        """
+        if (self.ovs_idl.db_get(
+                'Bridge', self.ovn_bridge, 'datapath_type').execute() !=
+                ovn_const.CHASSIS_DATAPATH_NETDEV):
+            return
+
+        iptables_mgr = iptables_manager.IptablesManager(
+            use_ipv6=True, nat=False, namespace=namespace, external_lock=False)
+        rule = '-p tcp -m tcp -j CHECKSUM --checksum-fill'
+        iptables_mgr.ipv4['mangle'].add_rule('POSTROUTING', rule, wrap=False)
+        iptables_mgr.apply()
+
     def provision_datapath(self, datapath):
         """Provision the datapath so that it can serve metadata.
 
@@ -468,6 +493,9 @@
             'Interface', veth_name[0],
             ('external_ids', {'iface-id': port.logical_port})).execute()
 
+        # Ensure the correct checksum in the metadata traffic.
+        self._ensure_datapath_checksum(namespace)
+
         # Spawn metadata proxy if it's not already running.
         metadata_driver.MetadataDriver.spawn_monitored_metadata_proxy(
             self._process_monitor, namespace, n_const.METADATA_PORT,
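The @tenacity.retry decorator added to register_metadata_agent() above
makes the agent retry its OVN SB registration with exponential back-off
instead of dying if the database is briefly unreachable at start-up. A
minimal, runnable sketch of the same pattern (the 180-second cap is
illustrative; neutron takes it from its OVSDB retry configuration):

    import tenacity

    attempts = 0

    @tenacity.retry(wait=tenacity.wait_exponential(max=180),  # cap back-off
                    reraise=True)  # re-raise the last error, not RetryError
    def register_metadata_agent():
        global attempts
        attempts += 1
        if attempts < 3:
            raise ConnectionError('OVN SB database not reachable yet')
        return 'registered'

    print(register_metadata_agent(), 'after', attempts, 'attempts')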
diff -Nru neutron-17.1.0/neutron/agent/securitygroups_rpc.py neutron-17.1.1/neutron/agent/securitygroups_rpc.py
--- neutron-17.1.0/neutron/agent/securitygroups_rpc.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/agent/securitygroups_rpc.py	2021-03-13 02:26:48.000000000 +0100
@@ -18,6 +18,7 @@
 
 from neutron_lib.api.definitions import rbac_security_groups as rbac_sg_apidef
 from neutron_lib.api.definitions import stateful_security_group as stateful_sg
+from oslo_concurrency import lockutils
 from oslo_config import cfg
 from oslo_log import log as logging
 import oslo_messaging
@@ -62,6 +63,9 @@
         self.context = context
         self.plugin_rpc = plugin_rpc
         self.init_firewall(defer_refresh_firewall, integration_bridge)
+        # _latest_port_filter_lock will point to the lock created for the
+        # most recent thread to enter _apply_port_filters().
+        self._latest_port_filter_lock = lockutils.ReaderWriterLock()
 
     def _get_trusted_devices(self, device_ids, devices):
         trusted_devices = []
@@ -77,6 +81,27 @@
                 trusted_devices.append(device_id)
         return trusted_devices
 
+    def _port_filter_lock(func):
+        """Decorator to acquire a new lock while applying port filters"""
+        @functools.wraps(func)
+        def decorated_function(self, *args, **kwargs):
+            lock = lockutils.ReaderWriterLock()
+            # Tracking the most recent lock at the instance level allows
+            # waiters to only wait for the most recent lock to be released
+            # instead of waiting until all locks have been released.
+            self._latest_port_filter_lock = lock
+            with lock.write_lock():
+                return func(self, *args, **kwargs)
+        return decorated_function
+
+    def _port_filter_wait(func):
+        """Decorator to wait for the latest port filter lock to be released"""
+        @functools.wraps(func)
+        def decorated_function(self, *args, **kwargs):
+            with self._latest_port_filter_lock.read_lock():
+                return func(self, *args, **kwargs)
+        return decorated_function
+
     def init_firewall(self, defer_refresh_firewall=False,
                       integration_bridge=None):
         firewall_driver = cfg.CONF.SECURITYGROUP.firewall_driver or 'noop'
@@ -138,6 +163,7 @@
         LOG.info("Preparing filters for devices %s", device_ids)
         self._apply_port_filter(device_ids)
 
+    @_port_filter_lock
     def _apply_port_filter(self, device_ids, update_filter=False):
         step = common_constants.AGENT_RES_PROCESSING_STEP
         if self.use_enhanced_rpc:
@@ -195,6 +221,7 @@
             'security_group_source_groups',
             'sg_member')
 
+    @_port_filter_wait
     def _security_group_updated(self, security_groups, attribute, action_type):
         devices = []
         sec_grp_set = set(security_groups)
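The two decorators added above implement a "wait only for the latest
writer" scheme: every _apply_port_filter() call takes a brand-new write
lock and publishes it on the instance, so an incoming security-group
update blocks only until the most recent filter application finishes,
not until every historical lock is released. A condensed, runnable
sketch of the pattern (class and method names are illustrative, not
neutron's):

    import functools

    from oslo_concurrency import lockutils

    class FilterManager:
        def __init__(self):
            # Always points at the lock of the most recent writer.
            self._latest_lock = lockutils.ReaderWriterLock()

        def _writes(func):
            @functools.wraps(func)
            def wrapper(self, *args, **kwargs):
                lock = lockutils.ReaderWriterLock()
                self._latest_lock = lock  # readers now wait on this one only
                with lock.write_lock():
                    return func(self, *args, **kwargs)
            return wrapper

        def _waits(func):
            @functools.wraps(func)
            def wrapper(self, *args, **kwargs):
                with self._latest_lock.read_lock():
                    return func(self, *args, **kwargs)
            return wrapper

        @_writes
        def apply_port_filter(self, device_ids):
            print('applying filters for', device_ids)

        @_waits
        def security_group_updated(self, sg_ids):
            print('handling SG update for', sg_ids)

    fm = FilterManager()
    fm.apply_port_filter(['port-1'])
    fm.security_group_updated(['sg-1'])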
diff -Nru neutron-17.1.0/neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py neutron-17.1.1/neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py
--- neutron-17.1.0/neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py	2021-03-13 02:26:48.000000000 +0100
@@ -268,7 +268,8 @@
     def _after_router_interface_deleted(self, resource, event, trigger,
                                         **kwargs):
         self._notify_agents(kwargs['context'], 'port_delete_end',
-                            {'port_id': kwargs['port']['id']},
+                            {'port_id': kwargs['port']['id'],
+                             'fixed_ips': kwargs['port']['fixed_ips']},
                             kwargs['port']['network_id'])
 
     def _native_event_send_dhcp_notification(self, resource, event, trigger,
@@ -343,6 +344,8 @@
                 payload = {obj_type + '_id': obj_value['id']}
                 if obj_type != 'network':
                     payload['network_id'] = network_id
+                if obj_type == 'port':
+                    payload['fixed_ips'] = obj_value['fixed_ips']
                 self._notify_agents(context, method_name, payload, network_id)
         else:
             self._notify_agents(context, method_name, data, network_id)
diff -Nru neutron-17.1.0/neutron/common/_constants.py neutron-17.1.1/neutron/common/_constants.py
--- neutron-17.1.0/neutron/common/_constants.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/common/_constants.py	2021-03-13 02:26:48.000000000 +0100
@@ -76,4 +76,5 @@
 # with these owners, it will allow subnet deletion to proceed with the
 # IP allocations being cleaned up by cascade.
 AUTO_DELETE_PORT_OWNERS = [constants.DEVICE_OWNER_DHCP,
-                           constants.DEVICE_OWNER_DISTRIBUTED]
+                           constants.DEVICE_OWNER_DISTRIBUTED,
+                           constants.DEVICE_OWNER_AGENT_GW]
diff -Nru neutron-17.1.0/neutron/common/ovn/constants.py neutron-17.1.1/neutron/common/ovn/constants.py
--- neutron-17.1.0/neutron/common/ovn/constants.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/common/ovn/constants.py	2021-03-13 02:26:48.000000000 +0100
@@ -274,6 +274,8 @@
 LSP_TYPE_EXTERNAL = 'external'
 LSP_OPTIONS_VIRTUAL_PARENTS_KEY = 'virtual-parents'
 LSP_OPTIONS_VIRTUAL_IP_KEY = 'virtual-ip'
+LSP_OPTIONS_MCAST_FLOOD_REPORTS = 'mcast_flood_reports'
+LSP_OPTIONS_MCAST_FLOOD = 'mcast_flood'
 
 HA_CHASSIS_GROUP_DEFAULT_NAME = 'default_ha_chassis_group'
 HA_CHASSIS_GROUP_HIGHEST_PRIORITY = 32767
diff -Nru neutron-17.1.0/neutron/db/db_base_plugin_common.py neutron-17.1.1/neutron/db/db_base_plugin_common.py
--- neutron-17.1.0/neutron/db/db_base_plugin_common.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/db/db_base_plugin_common.py	2021-03-13 02:26:48.000000000 +0100
@@ -217,7 +217,8 @@
 
     def _make_port_dict(self, port, fields=None,
                         process_extensions=True,
-                        with_fixed_ips=True):
+                        with_fixed_ips=True,
+                        bulk=False):
         mac = port["mac_address"]
         if isinstance(mac, netaddr.EUI):
             mac.dialect = netaddr.mac_unix_expanded
@@ -240,8 +241,10 @@
             port_data = port
             if isinstance(port, port_obj.Port):
                 port_data = port.db_obj
+            res['bulk'] = bulk
             resource_extend.apply_funcs(
                 port_def.COLLECTION_NAME, res, port_data)
+            res.pop('bulk')
         return db_utils.resource_fields(res, fields)
 
     def _get_network(self, context, id):
diff -Nru neutron-17.1.0/neutron/db/db_base_plugin_v2.py neutron-17.1.1/neutron/db/db_base_plugin_v2.py
--- neutron-17.1.0/neutron/db/db_base_plugin_v2.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/db/db_base_plugin_v2.py	2021-03-13 02:26:48.000000000 +0100
@@ -1563,7 +1563,9 @@
                                       sorts=sorts, limit=limit,
                                       marker_obj=marker_obj,
                                       page_reverse=page_reverse)
-        items = [self._make_port_dict(c, fields) for c in query]
+        items = [self._make_port_dict(c, fields, bulk=True) for c in query]
+        # TODO(obondarev): use neutron_lib constant
+        resource_extend.apply_funcs('ports_bulk', items, None)
         if limit and page_reverse:
             items.reverse()
         return items
diff -Nru neutron-17.1.0/neutron/db/l3_dvr_db.py neutron-17.1.1/neutron/db/l3_dvr_db.py
--- neutron-17.1.0/neutron/db/l3_dvr_db.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/db/l3_dvr_db.py	2021-03-13 02:26:48.000000000 +0100
@@ -13,6 +13,8 @@
 #    under the License.
 import collections
 
+import netaddr
+from neutron_lib.api.definitions import external_net as extnet_apidef
 from neutron_lib.api.definitions import l3 as l3_apidef
 from neutron_lib.api.definitions import portbindings
 from neutron_lib.api.definitions import portbindings_extended
@@ -48,6 +50,7 @@
 from neutron.extensions import _admin_state_down_before_update_lib
 from neutron.ipam import utils as ipam_utils
 from neutron.objects import agent as ag_obj
+from neutron.objects import base as base_obj
 from neutron.objects import l3agent as rb_obj
 from neutron.objects import router as l3_obj
 
@@ -409,6 +412,15 @@
                 if host_id:
                     return
 
+    @registry.receives(resources.NETWORK, [events.AFTER_DELETE])
+    def delete_fip_namespaces_for_ext_net(self, rtype, event, trigger,
+                                          context, network, **kwargs):
+        if network.get(extnet_apidef.EXTERNAL):
+            # Send the information to all the L3 Agent hosts
+            # to clean up the fip namespace as it is no longer required.
+            self.l3plugin.l3_rpc_notifier.delete_fipnamespace_for_ext_net(
+                context, network['id'])
+
     def _get_ports_for_allowed_address_pair_ip(self, context, network_id,
                                                fixed_ip):
         """Return all active ports associated with the allowed_addr_pair ip."""
@@ -469,6 +481,17 @@
                                 fixed_ip_address))
                         if not addr_pair_active_service_port_list:
                             return
+                        self._inherit_service_port_and_arp_update(
+                            context, addr_pair_active_service_port_list[0])
+
+    def _inherit_service_port_and_arp_update(self, context, service_port):
+        """Function inherits port host bindings for allowed_address_pair."""
+        service_port_dict = self.l3plugin._core_plugin._make_port_dict(
+            service_port)
+        address_pair_list = service_port_dict.get('allowed_address_pairs')
+        for address_pair in address_pair_list:
+            self.update_arp_entry_for_dvr_service_port(context,
+                                                       service_port_dict)
 
     @registry.receives(resources.ROUTER_INTERFACE, [events.BEFORE_CREATE])
     @db_api.retry_if_session_inactive()
@@ -1104,6 +1127,21 @@
         self._populate_mtu_and_subnets_for_ports(context, [agent_port])
         return agent_port
 
+    def _generate_arp_table_and_notify_agent(self, context, fixed_ip,
+                                             mac_address, notifier):
+        """Generates the arp table entry and notifies the l3 agent."""
+        ip_address = fixed_ip['ip_address']
+        subnet = fixed_ip['subnet_id']
+        arp_table = {'ip_address': ip_address,
+                     'mac_address': mac_address,
+                     'subnet_id': subnet}
+        filters = {'fixed_ips': {'subnet_id': [subnet]},
+                   'device_owner': [const.DEVICE_OWNER_DVR_INTERFACE]}
+        ports = self._core_plugin.get_ports(context, filters=filters)
+        routers = [port['device_id'] for port in ports]
+        for router_id in routers:
+            notifier(context, router_id, arp_table)
+
     def _get_subnet_id_for_given_fixed_ip(self, context, fixed_ip, port_dict):
         """Returns the subnet_id that matches the fixedip on a network."""
         filters = {'network_id': [port_dict['network_id']]}
@@ -1112,6 +1150,78 @@
             if ipam_utils.check_subnet_ip(subnet['cidr'], fixed_ip):
                 return subnet['id']
 
+    def _get_allowed_address_pair_fixed_ips(self, context, port_dict):
+        """Returns all fixed_ips associated with the allowed_address_pair."""
+        aa_pair_fixed_ips = []
+        if port_dict.get('allowed_address_pairs'):
+            for address_pair in port_dict['allowed_address_pairs']:
+                aap_ip_cidr = address_pair['ip_address'].split("/")
+                if len(aap_ip_cidr) == 1 or int(aap_ip_cidr[1]) == 32:
+                    subnet_id = self._get_subnet_id_for_given_fixed_ip(
+                        context, aap_ip_cidr[0], port_dict)
+                    if subnet_id is not None:
+                        fixed_ip = {'subnet_id': subnet_id,
+                                    'ip_address': aap_ip_cidr[0]}
+                        aa_pair_fixed_ips.append(fixed_ip)
+                    else:
+                        LOG.debug("Subnet does not match for the given "
+                                  "fixed_ip %s for arp update", aap_ip_cidr[0])
+        return aa_pair_fixed_ips
+
+    def update_arp_entry_for_dvr_service_port(self, context, port_dict):
+        """Notify L3 agents of ARP table entry for dvr service port.
+
+        When a dvr service port goes up, look for the DVR router on
+        the port's subnet, and send the ARP details to all
+        L3 agents hosting the router to add it.
+        If there are any allowed_address_pairs associated with the port
+        those fixed_ips should also be updated in the ARP table.
+        """
+        fixed_ips = port_dict['fixed_ips']
+        if not fixed_ips:
+            return
+        allowed_address_pair_fixed_ips = (
+            self._get_allowed_address_pair_fixed_ips(context, port_dict))
+        changed_fixed_ips = fixed_ips + allowed_address_pair_fixed_ips
+        for fixed_ip in changed_fixed_ips:
+            self._generate_arp_table_and_notify_agent(
+                context, fixed_ip, port_dict['mac_address'],
+                self.l3_rpc_notifier.add_arp_entry)
+
+    def delete_arp_entry_for_dvr_service_port(self, context, port_dict,
+                                              fixed_ips_to_delete=None):
+        """Notify L3 agents of ARP table entry for dvr service port.
+
+        When a dvr service port goes down, look for the DVR
+        router on the port's subnet, and send the ARP details to all
+        L3 agents hosting the router to delete it.
+        If there are any allowed_address_pairs associated with the
+        port, those fixed_ips should be removed from the ARP table.
+        """
+        fixed_ips = port_dict['fixed_ips']
+        if not fixed_ips:
+            return
+        if not fixed_ips_to_delete:
+            allowed_address_pair_fixed_ips = (
+                self._get_allowed_address_pair_fixed_ips(context, port_dict))
+            fixed_ips_to_delete = fixed_ips + allowed_address_pair_fixed_ips
+        for fixed_ip in fixed_ips_to_delete:
+            self._generate_arp_table_and_notify_agent(
+                context, fixed_ip, port_dict['mac_address'],
+                self.l3_rpc_notifier.del_arp_entry)
+
+    def _get_address_pair_active_port_with_fip(
+            self, context, port_dict, port_addr_pair_ip):
+        port_valid_state = (port_dict['admin_state_up'] or
+                            port_dict['status'] == const.PORT_STATUS_ACTIVE)
+        if not port_valid_state:
+            return
+        fips = l3_obj.FloatingIP.get_objects(
+            context, _pager=base_obj.Pager(limit=1),
+            fixed_ip_address=netaddr.IPAddress(port_addr_pair_ip))
+        return self._core_plugin.get_port(
+            context, fips[0].fixed_port_id) if fips else None
+
 
 class L3_NAT_with_dvr_db_mixin(_DVRAgentInterfaceMixin,
                                DVRResourceOperationHandler,
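
Note on the ARP helpers added above: for every fixed IP of a DVR service
port, including allowed-address-pair IPs that resolve to a host route, one
ARP entry is built and pushed to each router that has a DVR interface on
the subnet. A minimal standalone sketch of that fan-out, with plain dicts
and a callable standing in for neutron's L3 RPC notifier:

    # 'dvr_ports' mimics the get_ports() result filtered on
    # DEVICE_OWNER_DVR_INTERFACE; 'notifier' mimics
    # l3_rpc_notifier.add_arp_entry / del_arp_entry.
    def notify_arp_entry(fixed_ip, mac_address, dvr_ports, notifier):
        arp_table = {'ip_address': fixed_ip['ip_address'],
                     'mac_address': mac_address,
                     'subnet_id': fixed_ip['subnet_id']}
        # One notification per router hosting a DVR interface on the subnet.
        for router_id in {p['device_id'] for p in dvr_ports}:
            notifier(router_id, arp_table)

    notify_arp_entry({'ip_address': '10.0.0.5', 'subnet_id': 'subnet-1'},
                     'fa:16:3e:00:00:01',
                     [{'device_id': 'r1'}, {'device_id': 'r2'}],
                     lambda rid, arp: print(rid, arp))
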
diff -Nru neutron-17.1.0/neutron/db/l3_dvrscheduler_db.py neutron-17.1.1/neutron/db/l3_dvrscheduler_db.py
--- neutron-17.1.0/neutron/db/l3_dvrscheduler_db.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/db/l3_dvrscheduler_db.py	2021-03-13 02:26:48.000000000 +0100
@@ -518,6 +518,19 @@
         return any([r in dvr_routers for r in related_routers])
 
 
+def _dvr_handle_unbound_allowed_addr_pair_add(
+        plugin, context, port, allowed_address_pair):
+    plugin.update_arp_entry_for_dvr_service_port(context, port)
+
+
+def _dvr_handle_unbound_allowed_addr_pair_del(
+        plugin, context, port, allowed_address_pair):
+    aa_fixed_ips = plugin._get_allowed_address_pair_fixed_ips(context, port)
+    if aa_fixed_ips:
+        plugin.delete_arp_entry_for_dvr_service_port(
+            context, port, fixed_ips_to_delete=aa_fixed_ips)
+
+
 def _notify_l3_agent_new_port(resource, event, trigger, **kwargs):
     LOG.debug('Received %(resource)s %(event)s', {
         'resource': resource,
@@ -530,6 +543,7 @@
         l3plugin = directory.get_plugin(plugin_constants.L3)
         context = kwargs['context']
         l3plugin.dvr_handle_new_service_port(context, port)
+        l3plugin.update_arp_entry_for_dvr_service_port(context, port)
 
 
 def _notify_port_delete(event, resource, trigger, **kwargs):
@@ -537,6 +551,14 @@
     port = kwargs['port']
     get_related_hosts_info = kwargs.get("get_related_hosts_info", True)
     l3plugin = directory.get_plugin(plugin_constants.L3)
+    if port:
+        port_host = port.get(portbindings.HOST_ID)
+        allowed_address_pairs_list = port.get('allowed_address_pairs')
+        if allowed_address_pairs_list and port_host:
+            for address_pair in allowed_address_pairs_list:
+                _dvr_handle_unbound_allowed_addr_pair_del(
+                    l3plugin, context, port, address_pair)
+    l3plugin.delete_arp_entry_for_dvr_service_port(context, port)
     removed_routers = l3plugin.get_dvr_routers_to_remove(
         context, port, get_related_hosts_info)
     for info in removed_routers:
@@ -625,7 +647,32 @@
                     context, new_port,
                     dest_host=dest_host,
                     router_id=fip_router_id)
+            l3plugin.update_arp_entry_for_dvr_service_port(
+                context, new_port)
             return
+        # Check for allowed_address_pairs and port state
+        new_port_host = new_port.get(portbindings.HOST_ID)
+        allowed_address_pairs_list = new_port.get('allowed_address_pairs')
+        if allowed_address_pairs_list and new_port_host:
+            new_port_state = new_port.get('admin_state_up')
+            original_port_state = original_port.get('admin_state_up')
+            if new_port_state:
+                # Case where we activate the port from inactive state,
+                # or the same port has additional address_pairs added.
+                for address_pair in allowed_address_pairs_list:
+                    _dvr_handle_unbound_allowed_addr_pair_add(
+                        l3plugin, context, new_port, address_pair)
+                return
+            elif original_port_state:
+                # Case where we deactivate the port from active state.
+                for address_pair in allowed_address_pairs_list:
+                    _dvr_handle_unbound_allowed_addr_pair_del(
+                        l3plugin, context, original_port, address_pair)
+                return
+
+        if kwargs.get('mac_address_updated') or is_fixed_ips_changed:
+            l3plugin.update_arp_entry_for_dvr_service_port(
+                context, new_port)
 
 
 def subscribe():
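
Note on the scheduler hooks above: the port-update path only touches ARP
state for ports that carry allowed_address_pairs and are bound to a host;
activation adds the entries, deactivation removes them. A toy sketch of
that branch, where add_pairs / del_pairs stand in for the
_dvr_handle_unbound_allowed_addr_pair_{add,del} helpers and 'binding_host'
stands in for the portbindings.HOST_ID key:

    def on_port_update(new_port, original_port, add_pairs, del_pairs):
        pairs = new_port.get('allowed_address_pairs')
        if not (pairs and new_port.get('binding_host')):
            return
        if new_port.get('admin_state_up'):
            add_pairs(new_port, pairs)        # port went (or stayed) active
        elif original_port.get('admin_state_up'):
            del_pairs(original_port, pairs)   # port went inactive
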
diff -Nru neutron-17.1.0/neutron/db/securitygroups_db.py neutron-17.1.1/neutron/db/securitygroups_db.py
--- neutron-17.1.0/neutron/db/securitygroups_db.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/db/securitygroups_db.py	2021-03-13 02:26:48.000000000 +0100
@@ -14,6 +14,7 @@
 
 import netaddr
 from neutron_lib.api.definitions import port as port_def
+from neutron_lib.api import extensions
 from neutron_lib.api import validators
 from neutron_lib.callbacks import events
 from neutron_lib.callbacks import exceptions
@@ -860,6 +861,8 @@
 
         :returns: the default security group id for given tenant.
         """
+        if not extensions.is_extension_supported(self, 'security-group'):
+            return
         default_group_id = self._get_default_sg_id(context, tenant_id)
         if default_group_id:
             return default_group_id
@@ -918,7 +921,8 @@
             port_project = port.get('tenant_id')
             default_sg = self._ensure_default_security_group(context,
                                                              port_project)
-            port[ext_sg.SECURITYGROUPS] = [default_sg]
+            if default_sg:
+                port[ext_sg.SECURITYGROUPS] = [default_sg]
 
     def _check_update_deletes_security_groups(self, port):
         """Return True if port has as a security group and it's value
diff -Nru neutron-17.1.0/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/backports.py neutron-17.1.1/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/backports.py
--- neutron-17.1.0/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/backports.py	1970-01-01 01:00:00.000000000 +0100
+++ neutron-17.1.1/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/backports.py	2021-03-13 02:26:48.000000000 +0100
@@ -0,0 +1,36 @@
+# Copyright 2021 Red Hat, Inc.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+#
+# We don't technically require an ovsdbapp release that has these
+# fixes, so we just include them here for stable releases.
+try:
+    from ovsdbapp.backend.ovs_idl import idlutils
+    frozen_row = idlutils.frozen_row
+except AttributeError:
+    def frozen_row(row):
+        return row._table.rows.IndexEntry(
+            uuid=row.uuid,
+            **{col: getattr(row, col)
+                for col in row._table.columns if hasattr(row, col)})
+
+try:
+    from ovsdbapp.backend.ovs_idl import event as row_event
+    from ovsdbapp import event as ovsdb_event
+
+    RowEventHandler = row_event.RowEventHandler
+except AttributeError:
+    class RowEventHandler(ovsdb_event.RowEventHandler):
+        def notify(self, event, row, updates=None):
+            row = frozen_row(row)
+            super().notify(event, row, updates)
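
Note on backports.py: ovsdbapp itself is a hard dependency, so the imports
cannot fail; what can fail on older releases is the attribute lookup
(idlutils.frozen_row, row_event.RowEventHandler), hence the except
AttributeError. The general shape of the probe, with a hypothetical module
name:

    # Probe-and-fallback at import time; 'somelib.helpers' is hypothetical.
    try:
        from somelib import helpers
        frozen_copy = helpers.frozen_copy   # AttributeError on old releases
    except (ImportError, AttributeError):
        def frozen_copy(obj):
            # Local shim with the same call signature as the library helper.
            return dict(obj)
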
diff -Nru neutron-17.1.0/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py neutron-17.1.1/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py
--- neutron-17.1.0/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py	2021-03-13 02:26:48.000000000 +0100
@@ -661,6 +661,34 @@
                     txn.add(cmd)
         raise periodics.NeverAgain()
 
+    # TODO(lucasagomes): Remove this in the Y cycle
+    # A static spacing value is used here, but this method will only run
+    # once per lock due to the use of periodics.NeverAgain().
+    @periodics.periodic(spacing=600, run_immediately=True)
+    def check_for_mcast_flood_reports(self):
+        cmds = []
+        for port in self._nb_idl.lsp_list().execute(check_error=True):
+            port_type = port.type.strip()
+            if port_type in ("vtep", "localport", "router"):
+                continue
+
+            options = port.options
+            if ovn_const.LSP_OPTIONS_MCAST_FLOOD_REPORTS in options:
+                continue
+
+            options.update({ovn_const.LSP_OPTIONS_MCAST_FLOOD_REPORTS: 'true'})
+            if port_type == ovn_const.LSP_TYPE_LOCALNET:
+                options.update({ovn_const.LSP_OPTIONS_MCAST_FLOOD: 'true'})
+
+            cmds.append(self._nb_idl.lsp_set_options(port.name, **options))
+
+        if cmds:
+            with self._nb_idl.transaction(check_error=True) as txn:
+                for cmd in cmds:
+                    txn.add(cmd)
+
+        raise periodics.NeverAgain()
+
 
 class HashRingHealthCheckPeriodics(object):
 
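
Note on check_for_mcast_flood_reports: it follows the file's usual run-once
pattern, where the periodics decorator schedules the task but raising
NeverAgain retires it after the first pass. A minimal sketch of that
pattern, assuming the futurist library (which provides neutron's
`periodics`) is available:

    from futurist import periodics

    @periodics.periodic(spacing=600, run_immediately=True)
    def one_shot_task():
        print('runs once per worker, then retires itself')
        raise periodics.NeverAgain()

    # PeriodicWorker takes (callable, args, kwargs) tuples;
    # worker.start() would run the task once and then drop it.
    worker = periodics.PeriodicWorker([(one_shot_task, (), {})])
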
diff -Nru neutron-17.1.0/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py neutron-17.1.1/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py
--- neutron-17.1.0/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py	2021-03-13 02:26:48.000000000 +0100
@@ -298,6 +298,14 @@
             options.update({'requested-chassis':
                             port.get(portbindings.HOST_ID, '')})
 
+        # TODO(lucasagomes): Enable the mcast_flood_reports by default,
+        # according to core OVN developers it shouldn't cause any harm
+        # and will be ignored when mcast_snoop is False. We can revise
+        # this once https://bugzilla.redhat.com/show_bug.cgi?id=1933990
+        # (see comment #3) is fixed in Core OVN.
+        if port_type not in ('vtep', 'localport', 'router'):
+            options.update({ovn_const.LSP_OPTIONS_MCAST_FLOOD_REPORTS: 'true'})
+
         device_owner = port.get('device_owner', '')
         sg_ids = ' '.join(utils.get_lsp_security_groups(port))
         return OvnPortInfo(port_type, options, addresses, port_security,
@@ -1557,6 +1565,9 @@
     def create_provnet_port(self, network_id, segment, txn=None):
         tag = segment.get(segment_def.SEGMENTATION_ID, [])
         physnet = segment.get(segment_def.PHYSICAL_NETWORK)
+        options = {'network_name': physnet,
+                   ovn_const.LSP_OPTIONS_MCAST_FLOOD_REPORTS: 'true',
+                   ovn_const.LSP_OPTIONS_MCAST_FLOOD: 'true'}
         cmd = self._nb_idl.create_lswitch_port(
             lport_name=utils.ovn_provnet_port_name(segment['id']),
             lswitch_name=utils.ovn_name(network_id),
@@ -1564,7 +1575,7 @@
             external_ids={},
             type=ovn_const.LSP_TYPE_LOCALNET,
             tag=tag,
-            options={'network_name': physnet})
+            options=options)
         self._transaction([cmd], txn=txn)
 
     def delete_provnet_port(self, network_id, segment):
@@ -1951,7 +1962,8 @@
     def create_subnet(self, context, subnet, network):
         if subnet['enable_dhcp']:
             if subnet['ip_version'] == 4:
-                self.update_metadata_port(context, network['id'])
+                self.update_metadata_port(context, network['id'],
+                                          subnet_id=subnet['id'])
             self._add_subnet_dhcp_options(subnet, network)
         db_rev.bump_revision(context, subnet, ovn_const.TYPE_SUBNETS)
 
@@ -1968,7 +1980,8 @@
             subnet['id'])['subnet']
 
         if subnet['enable_dhcp'] or ovn_subnet:
-            self.update_metadata_port(context, network['id'])
+            self.update_metadata_port(context, network['id'],
+                                      subnet_id=subnet['id'])
 
         check_rev_cmd = self._nb_idl.check_revision_number(
             subnet['id'], subnet, ovn_const.TYPE_SUBNETS)
@@ -2076,12 +2089,24 @@
                 # TODO(boden): rehome create_port into neutron-lib
                 p_utils.create_port(self._plugin, context, port)
 
-    def update_metadata_port(self, context, network_id):
+    def update_metadata_port(self, context, network_id, subnet_id=None):
         """Update metadata port.
 
         This function will allocate an IP address for the metadata port of
-        the given network in all its IPv4 subnets.
+        the given network in all its IPv4 subnets or the given subnet.
         """
+        def update_metadata_port_fixed_ips(metadata_port, subnet_ids):
+            wanted_fixed_ips = [
+                {'subnet_id': fixed_ip['subnet_id'],
+                 'ip_address': fixed_ip['ip_address']} for fixed_ip in
+                metadata_port['fixed_ips']]
+            wanted_fixed_ips.extend({'subnet_id': s_id} for s_id in subnet_ids)
+            port = {'id': metadata_port['id'],
+                    'port': {'network_id': network_id,
+                             'fixed_ips': wanted_fixed_ips}}
+            self._plugin.update_port(n_context.get_admin_context(),
+                                     metadata_port['id'], port)
+
         if not ovn_conf.is_ovn_metadata_enabled():
             return
 
@@ -2092,31 +2117,28 @@
                       network_id)
             return
 
+        port_subnet_ids = set(ip['subnet_id'] for ip in
+                              metadata_port['fixed_ips'])
+
+        # If this method is called from "create_subnet" or "update_subnet",
+        # only the fixed IP address from this subnet should be updated in the
+        # metadata port.
+        if subnet_id:
+            if subnet_id not in port_subnet_ids:
+                update_metadata_port_fixed_ips(metadata_port, [subnet_id])
+            return
+
         # Retrieve all subnets in this network
         subnets = self._plugin.get_subnets(context, filters=dict(
             network_id=[network_id], ip_version=[4]))
 
         subnet_ids = set(s['id'] for s in subnets)
-        port_subnet_ids = set(ip['subnet_id'] for ip in
-                              metadata_port['fixed_ips'])
 
         # Find all subnets where metadata port doesn't have an IP in and
         # allocate one.
         if subnet_ids != port_subnet_ids:
-            wanted_fixed_ips = []
-            for fixed_ip in metadata_port['fixed_ips']:
-                wanted_fixed_ips.append(
-                    {'subnet_id': fixed_ip['subnet_id'],
-                     'ip_address': fixed_ip['ip_address']})
-            wanted_fixed_ips.extend(
-                dict(subnet_id=s)
-                for s in subnet_ids - port_subnet_ids)
-
-            port = {'id': metadata_port['id'],
-                    'port': {'network_id': network_id,
-                             'fixed_ips': wanted_fixed_ips}}
-            self._plugin.update_port(n_context.get_admin_context(),
-                                     metadata_port['id'], port)
+            update_metadata_port_fixed_ips(metadata_port,
+                                           subnet_ids - port_subnet_ids)
 
     def get_parent_port(self, port_id):
         return self._nb_idl.get_parent_port(port_id)
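
Note on the update_metadata_port rework: create_subnet and update_subnet
now pass the subnet at hand, so the metadata port only gets a new fixed IP
for that subnet instead of rescanning every IPv4 subnet on the network. A
sketch of the fixed_ips payload the nested helper sends to update_port,
with illustrative IDs:

    # Existing assignments are kept verbatim; new subnets are listed
    # without an ip_address so IPAM allocates one.
    existing = [{'subnet_id': 'subnet-a', 'ip_address': '10.0.0.2'}]
    new_subnet_ids = ['subnet-b']

    wanted_fixed_ips = [{'subnet_id': ip['subnet_id'],
                         'ip_address': ip['ip_address']} for ip in existing]
    wanted_fixed_ips.extend({'subnet_id': s} for s in new_subnet_ids)
    print(wanted_fixed_ips)
    # [{'subnet_id': 'subnet-a', 'ip_address': '10.0.0.2'},
    #  {'subnet_id': 'subnet-b'}]
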
diff -Nru neutron-17.1.0/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py neutron-17.1.1/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py
--- neutron-17.1.0/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py	2021-03-13 02:26:48.000000000 +0100
@@ -27,7 +27,6 @@
 from ovsdbapp.backend.ovs_idl import connection
 from ovsdbapp.backend.ovs_idl import event as row_event
 from ovsdbapp.backend.ovs_idl import idlutils
-from ovsdbapp import event
 
 from neutron.common.ovn import constants as ovn_const
 from neutron.common.ovn import exceptions
@@ -35,6 +34,7 @@
 from neutron.common.ovn import utils
 from neutron.conf.plugins.ml2.drivers.ovn import ovn_conf
 from neutron.db import ovn_hash_ring_db
+from neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb import backports
 
 
 CONF = cfg.CONF
@@ -358,7 +358,7 @@
         self.event_name = 'PortGroupCreated'
 
 
-class OvnDbNotifyHandler(event.RowEventHandler):
+class OvnDbNotifyHandler(backports.RowEventHandler):
     def __init__(self, driver):
         super(OvnDbNotifyHandler, self).__init__()
         self.driver = driver
@@ -374,7 +374,7 @@
 
 class BaseOvnIdl(Ml2OvnIdlBase):
     def __init__(self, remote, schema):
-        self.notify_handler = event.RowEventHandler()
+        self.notify_handler = backports.RowEventHandler()
         super(BaseOvnIdl, self).__init__(remote, schema)
 
     @classmethod
diff -Nru neutron-17.1.0/neutron/services/portforwarding/pf_plugin.py neutron-17.1.1/neutron/services/portforwarding/pf_plugin.py
--- neutron-17.1.0/neutron/services/portforwarding/pf_plugin.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/services/portforwarding/pf_plugin.py	2021-03-13 02:26:48.000000000 +0100
@@ -33,6 +33,7 @@
 from neutron_lib.plugins import constants
 from neutron_lib.plugins import directory
 from oslo_config import cfg
+from oslo_db import exception as oslo_db_exc
 from oslo_log import log as logging
 
 from neutron._i18n import _
@@ -430,7 +431,7 @@
                 pf_obj.update_fields(port_forwarding, reset_changes=True)
                 self._check_port_forwarding_update(context, pf_obj)
                 pf_obj.update()
-        except obj_exc.NeutronDbObjectDuplicateEntry:
+        except oslo_db_exc.DBDuplicateEntry:
             (__, conflict_params) = self._find_existing_port_forwarding(
                 context, floatingip_id, pf_obj.to_dict())
             message = _("A duplicate port forwarding entry with same "
diff -Nru neutron-17.1.0/neutron/services/qos/drivers/manager.py neutron-17.1.1/neutron/services/qos/drivers/manager.py
--- neutron-17.1.0/neutron/services/qos/drivers/manager.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/services/qos/drivers/manager.py	2021-03-13 02:26:48.000000000 +0100
@@ -154,6 +154,18 @@
 
         return False
 
+    def validate_rule_for_network(self, context, rule, network_id):
+        for driver in self._drivers:
+            if driver.is_rule_supported(rule):
+                # https://review.opendev.org/c/openstack/neutron-lib/+/774083
+                # is not yet present in this neutron-lib release.
+                if hasattr(driver, 'validate_rule_for_network'):
+                    return driver.validate_rule_for_network(context, rule,
+                                                            network_id)
+                return True
+
+        return False
+
     @property
     def supported_rule_types(self):
         if not self._drivers:
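
Note on validate_rule_for_network in the manager: the hasattr probe keeps
the code working against a neutron-lib whose driver base class predates the
new hook; a driver without it is treated as accepting the rule. A small
sketch of the duck-typed dispatch (class and helper names are illustrative):

    class LegacyDriver:            # predates the network-level hook
        def is_rule_supported(self, rule):
            return True

    def validate(driver, context, rule, network_id):
        if not driver.is_rule_supported(rule):
            return False
        if hasattr(driver, 'validate_rule_for_network'):
            return driver.validate_rule_for_network(context, rule,
                                                    network_id)
        return True                # no hook: assume the rule is fine

    print(validate(LegacyDriver(), None, rule=None, network_id='net-1'))
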
diff -Nru neutron-17.1.0/neutron/services/qos/drivers/openvswitch/driver.py neutron-17.1.1/neutron/services/qos/drivers/openvswitch/driver.py
--- neutron-17.1.0/neutron/services/qos/drivers/openvswitch/driver.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/services/qos/drivers/openvswitch/driver.py	2021-03-13 02:26:48.000000000 +0100
@@ -60,11 +60,14 @@
             requires_rpc_notifications=True)
 
     def validate_rule_for_port(self, context, rule, port):
+        return self.validate_rule_for_network(context, rule, port.network_id)
+
+    def validate_rule_for_network(self, context, rule, network_id):
         # Minimum-bandwidth rule is only supported on networks whose
         # first segment is backed by a physnet.
         if rule.rule_type == qos_consts.RULE_TYPE_MINIMUM_BANDWIDTH:
             net = network_object.Network.get_object(
-                context, id=port.network_id)
+                context, id=network_id)
             physnet = net.segments[0].physical_network
             if physnet is None:
                 return False
diff -Nru neutron-17.1.0/neutron/services/qos/qos_plugin.py neutron-17.1.1/neutron/services/qos/qos_plugin.py
--- neutron-17.1.0/neutron/services/qos/qos_plugin.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/services/qos/qos_plugin.py	2021-03-13 02:26:48.000000000 +0100
@@ -44,10 +44,16 @@
 from neutron.objects import ports as ports_object
 from neutron.objects.qos import policy as policy_object
 from neutron.objects.qos import qos_policy_validator as checker
+from neutron.objects.qos import rule as rule_object
 from neutron.objects.qos import rule_type as rule_type_object
 from neutron.services.qos.drivers import manager
 
 
+class QosRuleNotSupportedByNetwork(lib_exc.Conflict):
+    message = _("Rule %(rule_type)s is not supported "
+                "by network %(network_id)s")
+
+
 @resource_extend.has_resource_extenders
 class QoSPlugin(qos.QoSPluginBase):
     """Implementation of the Neutron QoS Service Plugin.
@@ -85,6 +91,10 @@
             self._validate_update_network_callback,
             callbacks_resources.NETWORK,
             callbacks_events.PRECOMMIT_UPDATE)
+        callbacks_registry.subscribe(
+            self._validate_create_network_callback,
+            callbacks_resources.NETWORK,
+            callbacks_events.PRECOMMIT_CREATE)
 
     @staticmethod
     @resource_extend.extends([port_def.COLLECTION_NAME])
@@ -102,9 +112,34 @@
         port_res['resource_request'] = None
         if not qos_id:
             return port_res
-        qos_policy = policy_object.QosPolicy.get_object(
-            context.get_admin_context(), id=qos_id)
 
+        if port_res.get('bulk'):
+            port_res['resource_request'] = {
+                'qos_id': qos_id,
+                'network_id': port_db.network_id,
+                'vnic_type': port_res[portbindings.VNIC_TYPE]}
+            return port_res
+
+        min_bw_rules = rule_object.QosMinimumBandwidthRule.get_objects(
+            context.get_admin_context(), qos_policy_id=qos_id)
+        resources = QoSPlugin._get_resources(min_bw_rules)
+        if not resources:
+            return port_res
+
+        segments = network_object.NetworkSegment.get_objects(
+            context.get_admin_context(), network_id=port_db.network_id)
+        traits = QoSPlugin._get_traits(port_res[portbindings.VNIC_TYPE],
+                                       segments)
+        if not traits:
+            return port_res
+
+        port_res['resource_request'] = {
+            'required': traits,
+            'resources': resources}
+        return port_res
+
+    @staticmethod
+    def _get_resources(min_bw_rules):
         resources = {}
         # NOTE(ralonsoh): we should move this translation dict to n-lib.
         rule_direction_class = {
@@ -113,34 +148,71 @@
             nl_constants.EGRESS_DIRECTION:
                 pl_constants.CLASS_NET_BW_EGRESS_KBPS
         }
-        for rule in qos_policy.rules:
-            if rule.rule_type == qos_consts.RULE_TYPE_MINIMUM_BANDWIDTH:
-                resources[rule_direction_class[rule.direction]] = rule.min_kbps
-        if not resources:
-            return port_res
-
-        # NOTE(ralonsoh): we should not rely on the current execution order of
-        # the port extending functions. Although here we have
-        # port_res[VNIC_TYPE], we should retrieve this value from the port DB
-        # object instead.
-        vnic_trait = pl_utils.vnic_type_trait(
-            port_res[portbindings.VNIC_TYPE])
+        for rule in min_bw_rules:
+            resources[rule_direction_class[rule.direction]] = rule.min_kbps
+        return resources
 
+    @staticmethod
+    def _get_traits(vnic_type, segments):
         # TODO(lajoskatona): Change to handle all segments when any traits
         # support will be available. See Placement spec:
         # https://review.opendev.org/565730
-        first_segment = network_object.NetworkSegment.get_objects(
-            context.get_admin_context(), network_id=port_db.network_id)[0]
-
+        first_segment = segments[0]
         if not first_segment or not first_segment.physical_network:
-            return port_res
+            return []
         physnet_trait = pl_utils.physnet_trait(
             first_segment.physical_network)
+        # NOTE(ralonsoh): we should not rely on the current execution order of
+        # the port extending functions. Although here we have
+        # port_res[VNIC_TYPE], we should retrieve this value from the port DB
+        # object instead.
+        vnic_trait = pl_utils.vnic_type_trait(vnic_type)
 
-        port_res['resource_request'] = {
-            'required': [physnet_trait, vnic_trait],
-            'resources': resources}
-        return port_res
+        return [physnet_trait, vnic_trait]
+
+    @staticmethod
+    # TODO(obondarev): use neutron_lib constant
+    @resource_extend.extends(['ports_bulk'])
+    def _extend_port_resource_request_bulk(ports_res, noop):
+        """Add resource request to a list of ports."""
+        min_bw_rules = dict()
+        net_segments = dict()
+
+        for port_res in ports_res:
+            if port_res.get('resource_request') is None:
+                continue
+            qos_id = port_res['resource_request'].pop('qos_id', None)
+            if not qos_id:
+                port_res['resource_request'] = None
+                continue
+
+            net_id = port_res['resource_request'].pop('network_id')
+            vnic_type = port_res['resource_request'].pop('vnic_type')
+
+            if qos_id not in min_bw_rules:
+                rules = rule_object.QosMinimumBandwidthRule.get_objects(
+                    context.get_admin_context(), qos_policy_id=qos_id)
+                min_bw_rules[qos_id] = rules
+
+            resources = QoSPlugin._get_resources(min_bw_rules[qos_id])
+            if not resources:
+                continue
+
+            if net_id not in net_segments:
+                segments = network_object.NetworkSegment.get_objects(
+                    context.get_admin_context(),
+                    network_id=net_id)
+                net_segments[net_id] = segments
+
+            traits = QoSPlugin._get_traits(vnic_type, net_segments[net_id])
+            if not traits:
+                continue
+
+            port_res['resource_request'] = {
+                'required': traits,
+                'resources': resources}
+
+        return ports_res
 
     def _get_ports_with_policy(self, context, policy):
         networks_ids = policy.get_bound_networks()
@@ -189,6 +261,20 @@
 
         self.validate_policy_for_port(context, policy, updated_port)
 
+    def _validate_create_network_callback(self, resource, event, trigger,
+                                          **kwargs):
+        context = kwargs['context']
+        network_id = kwargs['network']['id']
+        network = network_object.Network.get_object(context, id=network_id)
+
+        policy_id = network.qos_policy_id
+        if policy_id is None:
+            return
+
+        policy = policy_object.QosPolicy.get_object(
+            context.elevated(), id=policy_id)
+        self.validate_policy_for_network(context, policy, network_id)
+
     def _validate_update_network_callback(self, resource, event, trigger,
                                           payload=None):
         context = payload.context
@@ -203,6 +289,9 @@
 
         policy = policy_object.QosPolicy.get_object(
             context.elevated(), id=policy_id)
+        self.validate_policy_for_network(
+            context, policy, network_id=updated_network['id'])
+
         ports = ports_object.Port.get_objects(
                 context, network_id=updated_network['id'])
         # Filter only this ports which don't have overwritten policy
@@ -226,6 +315,13 @@
                 raise qos_exc.QosRuleNotSupported(rule_type=rule.rule_type,
                                                   port_id=port['id'])
 
+    def validate_policy_for_network(self, context, policy, network_id):
+        for rule in policy.rules:
+            if not self.driver_manager.validate_rule_for_network(
+                    context, rule, network_id):
+                raise QosRuleNotSupportedByNetwork(
+                    rule_type=rule.rule_type, network_id=network_id)
+
     def reject_min_bw_rule_updates(self, context, policy):
         ports = self._get_ports_with_policy(context, policy)
         for port in ports:
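
Note on the bulk resource_request path: when ports are extended in bulk,
the per-port extender only stashes (qos_id, network_id, vnic_type); the
ports_bulk extender then resolves minimum-bandwidth rules once per distinct
policy and segments once per distinct network. A sketch of that
memoization, with stand-in fetchers for the object-layer queries:

    def resolve_bulk(ports, fetch_rules, fetch_segments):
        rules_by_policy = {}
        segments_by_net = {}
        for port in ports:
            req = port.get('resource_request') or {}
            qos_id = req.get('qos_id')
            net_id = req.get('network_id')
            if not qos_id:
                continue
            if qos_id not in rules_by_policy:      # one query per policy
                rules_by_policy[qos_id] = fetch_rules(qos_id)
            if net_id not in segments_by_net:      # one query per network
                segments_by_net[net_id] = fetch_segments(net_id)
        return rules_by_policy, segments_by_net
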
diff -Nru neutron-17.1.0/neutron/services/trunk/plugin.py neutron-17.1.1/neutron/services/trunk/plugin.py
--- neutron-17.1.0/neutron/services/trunk/plugin.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/services/trunk/plugin.py	2021-03-13 02:26:48.000000000 +0100
@@ -79,17 +79,44 @@
                             'port_id': x.port_id}
                 for x in port_db.trunk_port.sub_ports
             }
-            core_plugin = directory.get_plugin()
-            ports = core_plugin.get_ports(
-                context.get_admin_context(), filters={'id': subports})
-            for port in ports:
-                subports[port['id']]['mac_address'] = port['mac_address']
+            if not port_res.get('bulk'):
+                core_plugin = directory.get_plugin()
+                ports = core_plugin.get_ports(
+                    context.get_admin_context(), filters={'id': subports})
+                for port in ports:
+                    subports[port['id']]['mac_address'] = port['mac_address']
             trunk_details = {'trunk_id': port_db.trunk_port.id,
                              'sub_ports': list(subports.values())}
             port_res['trunk_details'] = trunk_details
 
         return port_res
 
+    @staticmethod
+    # TODO(obondarev): use neutron_lib constant
+    @resource_extend.extends(['ports_bulk'])
+    def _extend_port_trunk_details_bulk(ports_res, noop):
+        """Add trunk subport details to a list of ports."""
+        subport_ids = []
+        trunk_ports = []
+        for p in ports_res:
+            if 'trunk_details' in p and 'sub_ports' in p['trunk_details']:
+                trunk_ports.append(p)
+                for subp in p['trunk_details']['sub_ports']:
+                    subport_ids.append(subp['port_id'])
+        if not subport_ids:
+            return ports_res
+
+        core_plugin = directory.get_plugin()
+        subports = core_plugin.get_ports(
+            context.get_admin_context(), filters={'id': subport_ids})
+        subport_macs = {p['id']: p['mac_address'] for p in subports}
+
+        for tp in trunk_ports:
+            for subp in tp['trunk_details']['sub_ports']:
+                subp['mac_address'] = subport_macs[subp['port_id']]
+
+        return ports_res
+
     def check_compatibility(self):
         """Verify the plugin can load correctly and fail otherwise."""
         self.check_driver_compatibility()
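
Note on the trunk bulk extender: listing many trunk parents previously cost
one get_ports() call per parent just to fill in subport MAC addresses; the
bulk hook collects every subport id first and resolves them in a single
query. The batching shape, with a stand-in for the core plugin call:

    def fill_subport_macs(trunk_ports, get_ports_by_ids):
        subport_ids = [sp['port_id']
                       for tp in trunk_ports
                       for sp in tp['trunk_details']['sub_ports']]
        macs = {p['id']: p['mac_address']
                for p in get_ports_by_ids(subport_ids)}
        for tp in trunk_ports:
            for sp in tp['trunk_details']['sub_ports']:
                sp['mac_address'] = macs[sp['port_id']]
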
diff -Nru neutron-17.1.0/neutron/services/trunk/rpc/server.py neutron-17.1.1/neutron/services/trunk/rpc/server.py
--- neutron-17.1.0/neutron/services/trunk/rpc/server.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/services/trunk/rpc/server.py	2021-03-13 02:26:48.000000000 +0100
@@ -127,6 +127,12 @@
         trunk_port_id = trunk.port_id
         trunk_port = self.core_plugin.get_port(context, trunk_port_id)
         trunk_host = trunk_port.get(portbindings.HOST_ID)
+        migrating_to_host = trunk_port.get(
+            portbindings.PROFILE, {}).get('migrating_to')
+        if migrating_to_host and trunk_host != migrating_to_host:
+            # The trunk is migrating now, so let's already update the host
+            # of the subports to the new host.
+            trunk_host = migrating_to_host
 
         # NOTE(status_police) Set the trunk in BUILD state before
         # processing subport bindings. The trunk will stay in BUILD
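
Note on the subport binding change: during a live migration the parent
port's binding profile carries a 'migrating_to' entry with the destination
host, so subports get bound on the target host before the migration
completes. A sketch of the override, using the binding keys as plain
strings:

    def effective_trunk_host(trunk_port):
        host = trunk_port.get('binding:host_id')
        profile = trunk_port.get('binding:profile') or {}
        migrating_to = profile.get('migrating_to')
        if migrating_to and migrating_to != host:
            return migrating_to   # bind subports on the target host early
        return host

    print(effective_trunk_host({'binding:host_id': 'src',
                                'binding:profile': {'migrating_to': 'dst'}}))
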
diff -Nru neutron-17.1.0/neutron/tests/fullstack/resources/client.py neutron-17.1.1/neutron/tests/fullstack/resources/client.py
--- neutron-17.1.0/neutron/tests/fullstack/resources/client.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/fullstack/resources/client.py	2021-03-13 02:26:48.000000000 +0100
@@ -81,7 +81,7 @@
 
     def create_network(self, tenant_id, name=None, external=False,
                        network_type=None, segmentation_id=None,
-                       physical_network=None, mtu=None):
+                       physical_network=None, mtu=None, qos_policy_id=None):
         resource_type = 'network'
 
         name = name or utils.get_rand_name(prefix=resource_type)
@@ -96,6 +96,8 @@
             spec['provider:physical_network'] = physical_network
         if mtu is not None:
             spec['mtu'] = mtu
+        if qos_policy_id is not None:
+            spec['qos_policy_id'] = qos_policy_id
 
         return self._create_resource(resource_type, spec)
 
diff -Nru neutron-17.1.0/neutron/tests/fullstack/resources/config.py neutron-17.1.1/neutron/tests/fullstack/resources/config.py
--- neutron-17.1.0/neutron/tests/fullstack/resources/config.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/fullstack/resources/config.py	2021-03-13 02:26:48.000000000 +0100
@@ -82,7 +82,7 @@
                      'password': rabbitmq_environment.password,
                      'host': rabbitmq_environment.host,
                      'vhost': rabbitmq_environment.vhost},
-                'api_workers': '2',
+                'api_workers': str(env_desc.api_workers),
             },
             'database': {
                 'connection': connection,
diff -Nru neutron-17.1.0/neutron/tests/fullstack/resources/environment.py neutron-17.1.1/neutron/tests/fullstack/resources/environment.py
--- neutron-17.1.0/neutron/tests/fullstack/resources/environment.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/fullstack/resources/environment.py	2021-03-13 02:26:48.000000000 +0100
@@ -40,7 +40,8 @@
                  global_mtu=constants.DEFAULT_NETWORK_MTU,
                  debug_iptables=False, log=False, report_bandwidths=False,
                  has_placement=False, placement_port=None,
-                 dhcp_scheduler_class=None, ml2_extension_drivers=None):
+                 dhcp_scheduler_class=None, ml2_extension_drivers=None,
+                 api_workers=1):
         self.network_type = network_type
         self.l2_pop = l2_pop
         self.qos = qos
@@ -62,6 +63,7 @@
         if self.log:
             self.service_plugins += ',log'
         self.ml2_extension_drivers = ml2_extension_drivers
+        self.api_workers = api_workers
 
     @property
     def tunneling_enabled(self):
diff -Nru neutron-17.1.0/neutron/tests/fullstack/test_dhcp_agent.py neutron-17.1.1/neutron/tests/fullstack/test_dhcp_agent.py
--- neutron-17.1.0/neutron/tests/fullstack/test_dhcp_agent.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/fullstack/test_dhcp_agent.py	2021-03-13 02:26:48.000000000 +0100
@@ -38,6 +38,7 @@
     ]
     boot_vm_for_test = True
     dhcp_scheduler_class = None
+    api_workers = 1
 
     def setUp(self):
         host_descriptions = [
@@ -52,6 +53,7 @@
                 arp_responder=False,
                 agent_down_time=self.agent_down_time,
                 dhcp_scheduler_class=self.dhcp_scheduler_class,
+                api_workers=self.api_workers,
             ),
             host_descriptions)
 
@@ -205,6 +207,7 @@
     agent_down_time = 30
     number_of_hosts = 2
     boot_vm_for_test = False
+    api_workers = 2
     dhcp_scheduler_class = ('neutron.tests.fullstack.schedulers.dhcp.'
                             'AlwaysTheOtherAgentScheduler')
 
diff -Nru neutron-17.1.0/neutron/tests/fullstack/test_qos.py neutron-17.1.1/neutron/tests/fullstack/test_qos.py
--- neutron-17.1.0/neutron/tests/fullstack/test_qos.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/fullstack/test_qos.py	2021-03-13 02:26:48.000000000 +0100
@@ -707,6 +707,31 @@
             queues = '\nList of OVS Queue registers:\n%s' % '\n'.join(queues)
             self.fail(queuenum + qoses + queues)
 
+    def test_min_bw_qos_create_network_vxlan_not_supported(self):
+        qos_policy = self._create_qos_policy()
+        qos_policy_id = qos_policy['id']
+        self.safe_client.create_minimum_bandwidth_rule(
+            self.tenant_id, qos_policy_id, MIN_BANDWIDTH, self.direction)
+        network_args = {'network_type': 'vxlan',
+                        'qos_policy_id': qos_policy_id}
+        self.assertRaises(
+            exceptions.Conflict,
+            self.safe_client.create_network,
+            self.tenant_id, name='network-test', **network_args)
+
+    def test_min_bw_qos_update_network_vxlan_not_supported(self):
+        network_args = {'network_type': 'vxlan'}
+        network = self.safe_client.create_network(
+            self.tenant_id, name='network-test', **network_args)
+        qos_policy = self._create_qos_policy()
+        qos_policy_id = qos_policy['id']
+        self.safe_client.create_minimum_bandwidth_rule(
+            self.tenant_id, qos_policy_id, MIN_BANDWIDTH, self.direction)
+        self.assertRaises(
+            exceptions.Conflict,
+            self.client.update_network, network['id'],
+            body={'network': {'qos_policy_id': qos_policy_id}})
+
     def test_min_bw_qos_port_removed(self):
         """Test if min BW limit config is properly removed when port removed.
 
diff -Nru neutron-17.1.0/neutron/tests/functional/agent/common/test_ovs_lib.py neutron-17.1.1/neutron/tests/functional/agent/common/test_ovs_lib.py
--- neutron-17.1.0/neutron/tests/functional/agent/common/test_ovs_lib.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/functional/agent/common/test_ovs_lib.py	2021-03-13 02:26:48.000000000 +0100
@@ -485,4 +485,4 @@
                 ipv6_port_options = interface['options']
         self.assertEqual(p_const.TYPE_GRE, ipv4_port_type)
         self.assertEqual(ovs_lib.TYPE_GRE_IP6, ipv6_port_type)
-        self.assertEqual('legacy', ipv6_port_options.get('packet_type'))
+        self.assertEqual('legacy_l2', ipv6_port_options.get('packet_type'))
diff -Nru neutron-17.1.0/neutron/tests/functional/agent/l3/test_dvr_router.py neutron-17.1.1/neutron/tests/functional/agent/l3/test_dvr_router.py
--- neutron-17.1.0/neutron/tests/functional/agent/l3/test_dvr_router.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/functional/agent/l3/test_dvr_router.py	2021-03-13 02:26:48.000000000 +0100
@@ -781,6 +781,54 @@
             self._assert_iptables_rules_exist(
                 iptables_mgr, 'nat', expected_rules)
 
+    def test_dvr_router_fip_associations_exist_when_router_reenabled(self):
+        """Test to validate the fip associations when router is re-enabled.
+
+        This test validates the fip associations when the router is disabled
+        and enabled back again. It specifically covers the host where the
+        snat namespace is not created or the gateway port is bound elsewhere.
+        """
+        self.agent.conf.agent_mode = 'dvr_snat'
+        router_info = self.generate_dvr_router_info(enable_snat=True)
+        # Ensure agent does not create snat namespace by changing gw_port_host
+        router_info['gw_port_host'] = 'agent2'
+        router_info_copy = copy.deepcopy(router_info)
+        router1 = self.manage_router(self.agent, router_info)
+
+        fip_ns_name = router1.fip_ns.name
+        self.assertTrue(self._namespace_exists(router1.fip_ns.name))
+
+        # Simulate disabling the router
+        self.agent._safe_router_removed(router1.router['id'])
+        self.assertFalse(self._namespace_exists(router1.ns_name))
+        self.assertTrue(self._namespace_exists(fip_ns_name))
+
+        # Simulate enabling the router again
+        router_updated = self.manage_router(self.agent, router_info_copy)
+        self._assert_dvr_floating_ips(router_updated)
+
+    def test_dvr_router_fip_associations_exist_when_snat_removed(self):
+        """Test to validate the fip associations when snat is removed.
+
+        This test validates the fip associations when the snat is removed from
+        the agent. The fip associations should exist when the snat is moved to
+        another l3 agent.
+        """
+        self.agent.conf.agent_mode = 'dvr_snat'
+        router_info = self.generate_dvr_router_info(enable_snat=True)
+        router_info_copy = copy.deepcopy(router_info)
+        router1 = self.manage_router(self.agent, router_info)
+
+        # Remove gateway port host and the binding host_id to simulate
+        # removal of snat from l3 agent
+        router_info_copy['gw_port_host'] = ''
+        router_info_copy['gw_port']['binding:host_id'] = ''
+        router_info_copy['gw_port']['binding:vif_type'] = 'unbound'
+        router_info_copy['gw_port']['binding:vif_details'] = {}
+        self.agent._process_updated_router(router_info_copy)
+        router_updated = self.agent.router_info[router1.router['id']]
+        self._assert_dvr_floating_ips(router_updated)
+
     def test_dvr_router_with_ha_for_fip_disassociation(self):
         """Test to validate the fip rules are deleted in dvr_snat_ha router.
 
diff -Nru neutron-17.1.0/neutron/tests/functional/agent/linux/test_ip_lib.py neutron-17.1.1/neutron/tests/functional/agent/linux/test_ip_lib.py
--- neutron-17.1.0/neutron/tests/functional/agent/linux/test_ip_lib.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/functional/agent/linux/test_ip_lib.py	2021-03-13 02:26:48.000000000 +0100
@@ -14,6 +14,7 @@
 #    under the License.
 
 import collections
+import copy
 import itertools
 import signal
 
@@ -1013,3 +1014,83 @@
             devices_filtered = self.device.addr.list(scope=scope)
             devices_cidr = {device['cidr'] for device in devices_filtered}
             self.assertIn(_ip, devices_cidr)
+
+
+class GetDevicesWithIpTestCase(functional_base.BaseSudoTestCase):
+
+    def setUp(self):
+        super().setUp()
+        self.namespace = self.useFixture(net_helpers.NamespaceFixture()).name
+        self.devices = []
+        self.num_devices = 5
+        self.num_devices_with_ip = 3
+        for idx in range(self.num_devices):
+            dev_name = 'test_device_%s' % idx
+            ip_lib.IPWrapper(self.namespace).add_dummy(dev_name)
+            device = ip_lib.IPDevice(dev_name, namespace=self.namespace)
+            device.link.set_up()
+            self.devices.append(device)
+
+        self.cidrs = [netaddr.IPNetwork('10.10.0.0/24'),
+                      netaddr.IPNetwork('10.20.0.0/24'),
+                      netaddr.IPNetwork('2001:db8:1234:1111::/64'),
+                      netaddr.IPNetwork('2001:db8:1234:2222::/64')]
+        for idx in range(self.num_devices_with_ip):
+            for cidr in self.cidrs:
+                self.devices[idx].addr.add(str(cidr.ip + idx) + '/' +
+                                           str(cidr.netmask.netmask_bits()))
+
+    @staticmethod
+    def _remove_loopback_interface(ip_addresses):
+        return [ipa for ipa in ip_addresses if
+                ipa['name'] != ip_lib.LOOPBACK_DEVNAME]
+
+    @staticmethod
+    def _remove_ipv6_scope_link(ip_addresses):
+        # Remove all IPv6 addresses with scope link (fe80::...).
+        return [ipa for ipa in ip_addresses if not (
+                ipa['scope'] == 'link' and utils.get_ip_version(ipa['cidr']))]
+
+    @staticmethod
+    def _pop_ip_address(ip_addresses, cidr):
+        for idx, ip_address in enumerate(copy.deepcopy(ip_addresses)):
+            if cidr == ip_address['cidr']:
+                ip_addresses.pop(idx)
+                return
+
+    def test_get_devices_with_ip(self):
+        ip_addresses = ip_lib.get_devices_with_ip(self.namespace)
+        ip_addresses = self._remove_loopback_interface(ip_addresses)
+        ip_addresses = self._remove_ipv6_scope_link(ip_addresses)
+        self.assertEqual(self.num_devices_with_ip * len(self.cidrs),
+                         len(ip_addresses))
+        for idx in range(self.num_devices_with_ip):
+            for cidr in self.cidrs:
+                cidr = (str(cidr.ip + idx) + '/' +
+                        str(cidr.netmask.netmask_bits()))
+                self._pop_ip_address(ip_addresses, cidr)
+
+        self.assertEqual(0, len(ip_addresses))
+
+    def test_get_devices_with_ip_name(self):
+        for idx in range(self.num_devices_with_ip):
+            dev_name = 'test_device_%s' % idx
+            ip_addresses = ip_lib.get_devices_with_ip(self.namespace,
+                                                      name=dev_name)
+            ip_addresses = self._remove_loopback_interface(ip_addresses)
+            ip_addresses = self._remove_ipv6_scope_link(ip_addresses)
+
+            for cidr in self.cidrs:
+                cidr = (str(cidr.ip + idx) + '/' +
+                        str(cidr.netmask.netmask_bits()))
+                self._pop_ip_address(ip_addresses, cidr)
+
+            self.assertEqual(0, len(ip_addresses))
+
+        for idx in range(self.num_devices_with_ip, self.num_devices):
+            dev_name = 'test_device_%s' % idx
+            ip_addresses = ip_lib.get_devices_with_ip(self.namespace,
+                                                      name=dev_name)
+            ip_addresses = self._remove_loopback_interface(ip_addresses)
+            ip_addresses = self._remove_ipv6_scope_link(ip_addresses)
+            self.assertEqual(0, len(ip_addresses))
diff -Nru neutron-17.1.0/neutron/tests/functional/agent/linux/test_ovsdb_monitor.py neutron-17.1.1/neutron/tests/functional/agent/linux/test_ovsdb_monitor.py
--- neutron-17.1.0/neutron/tests/functional/agent/linux/test_ovsdb_monitor.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/functional/agent/linux/test_ovsdb_monitor.py	2021-03-13 02:26:48.000000000 +0100
@@ -21,6 +21,7 @@
 
  - sudo testing is enabled (see neutron.tests.functional.base for details)
 """
+import time
 
 from oslo_config import cfg
 
@@ -129,6 +130,9 @@
             lambda: self._expected_devices_events(removed_devices, 'removed'))
         # restart
         self.monitor.stop(block=True)
+        # NOTE(slaweq): let's give the async process a few more seconds to
+        # receive the "error" from the old ovsdb monitor and start a new one
+        time.sleep(5)
         self.monitor.start(block=True, timeout=60)
         try:
             utils.wait_until_true(
diff -Nru neutron-17.1.0/neutron/tests/functional/agent/ovn/metadata/test_metadata_agent.py neutron-17.1.1/neutron/tests/functional/agent/ovn/metadata/test_metadata_agent.py
--- neutron-17.1.0/neutron/tests/functional/agent/ovn/metadata/test_metadata_agent.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/functional/agent/ovn/metadata/test_metadata_agent.py	2021-03-13 02:26:48.000000000 +0100
@@ -13,6 +13,7 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
+import re
 from unittest import mock
 
 from oslo_config import fixture as fixture_config
@@ -21,6 +22,7 @@
 from ovsdbapp.backend.ovs_idl import idlutils
 from ovsdbapp.tests.functional.schema.ovn_southbound import event as test_event
 
+from neutron.agent.linux import iptables_manager
 from neutron.agent.ovn.metadata import agent
 from neutron.agent.ovn.metadata import ovsdb
 from neutron.agent.ovn.metadata import server as metadata_server
@@ -28,6 +30,7 @@
 from neutron.common import utils as n_utils
 from neutron.conf.agent.metadata import config as meta_config
 from neutron.conf.agent.ovn.metadata import config as meta_config_ovn
+from neutron.tests.common import net_helpers
 from neutron.tests.functional import base
 
 
@@ -56,7 +59,12 @@
         super(TestMetadataAgent, self).setUp()
         self.handler = self.sb_api.idl.notify_handler
         # We only have OVN NB and OVN SB running for functional tests
-        mock.patch.object(ovsdb, 'MetadataAgentOvsIdl').start()
+        self.mock_ovsdb_idl = mock.Mock()
+        mock_metadata_instance = mock.Mock()
+        mock_metadata_instance.start.return_value = self.mock_ovsdb_idl
+        mock_metadata = mock.patch.object(
+            ovsdb, 'MetadataAgentOvsIdl').start()
+        mock_metadata.return_value = mock_metadata_instance
         self._mock_get_ovn_br = mock.patch.object(
             agent.MetadataAgent,
             '_get_ovn_bridge',
@@ -303,3 +311,19 @@
             ('external_ids', {'test': 'value'})).execute(check_error=True)
         self.assertTrue(event2.wait())
         self.assertFalse(event.wait())
+
+    def test__ensure_datapath_checksum_if_dpdk(self):
+        self.mock_ovsdb_idl.db_get.return_value.execute.return_value = (
+            ovn_const.CHASSIS_DATAPATH_NETDEV)
+        regex = re.compile(r'-A POSTROUTING -p tcp -m tcp '
+                           r'-j CHECKSUM --checksum-fill')
+        namespace = self.useFixture(net_helpers.NamespaceFixture()).name
+        self.agent._ensure_datapath_checksum(namespace)
+        iptables_mgr = iptables_manager.IptablesManager(
+            use_ipv6=True, nat=False, namespace=namespace, external_lock=False)
+        for rule in iptables_mgr.get_rules_for_table('mangle'):
+            if regex.match(rule):
+                return
+        # No rule matched: the loop fell through without returning.
+        self.fail('Rule not found in "mangle" table, in namespace %s' %
+                  namespace)
diff -Nru neutron-17.1.0/neutron/tests/functional/agent/test_ovs_lib.py neutron-17.1.1/neutron/tests/functional/agent/test_ovs_lib.py
--- neutron-17.1.0/neutron/tests/functional/agent/test_ovs_lib.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/functional/agent/test_ovs_lib.py	2021-03-13 02:26:48.000000000 +0100
@@ -233,6 +233,7 @@
         attrs = {
             'remote_ip': '2001:db8:200::1',
             'local_ip': '2001:db8:100::1',
+            'packet_type': 'legacy_l2',
         }
         self._test_add_tunnel_port(
             attrs, expected_tunnel_type=ovs_lib.TYPE_GRE_IP6)
diff -Nru neutron-17.1.0/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py neutron-17.1.1/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py
--- neutron-17.1.0/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py	2021-03-13 02:26:48.000000000 +0100
@@ -19,6 +19,7 @@
 from neutron_lib import constants
 from oslo_config import cfg
 from oslo_utils import uuidutils
+from ovsdbapp.backend.ovs_idl import event
 from ovsdbapp.tests.functional import base as ovs_base
 
 from neutron.common.ovn import constants as ovn_const
@@ -715,13 +716,11 @@
         ovn_localnetport = self._find_port_row_by_name(
             utils.ovn_provnet_port_name(seg_db[0]['id']))
         self.assertEqual(ovn_localnetport.tag, [100])
-        self.assertEqual(ovn_localnetport.options,
-                         {'network_name': 'physnet1'})
+        self.assertEqual(ovn_localnetport.options['network_name'], 'physnet1')
         seg_2 = self.create_segment(n1['id'], 'physnet2', '222')
         ovn_localnetport = self._find_port_row_by_name(
             utils.ovn_provnet_port_name(seg_2['id']))
-        self.assertEqual(ovn_localnetport.options,
-                         {'network_name': 'physnet2'})
+        self.assertEqual(ovn_localnetport.options['network_name'], 'physnet2')
         self.assertEqual(ovn_localnetport.tag, [222])
 
         # Delete segments and ensure that localnet
@@ -744,23 +743,70 @@
         self.assertIsNone(ovn_localnetport)
 
 
+class AgentWaitEvent(event.WaitEvent):
+    """Wait for a list of Chassis to be created"""
+
+    ONETIME = False
+
+    def __init__(self, driver, chassis_names):
+        table = driver.agent_chassis_table
+        events = (self.ROW_CREATE,)
+        self.chassis_names = chassis_names
+        super().__init__(events, table, None)
+        self.event_name = 'AgentWaitEvent'
+
+    def match_fn(self, event, row, old):
+        return row.name in self.chassis_names
+
+    def run(self, event, row, old):
+        self.chassis_names.remove(row.name)
+        if not self.chassis_names:
+            self.event.set()
+
+
 class TestAgentApi(base.TestOVNFunctionalBase):
+    TEST_AGENT = 'test'
 
-    def setUp(self):
+    def setUp(self, *args):
         super().setUp()
-        self.host = 'test-host'
-        self.controller_agent = self.add_fake_chassis(self.host)
+        self.host = n_utils.get_rand_name(prefix='testhost-')
         self.plugin = self.mech_driver._plugin
-        agent = {'agent_type': 'test', 'binary': '/bin/test',
-                 'host': self.host, 'topic': 'test_topic'}
-        _, status = self.plugin.create_or_update_agent(self.context, agent)
-        self.test_agent = status['id']
         mock.patch.object(self.mech_driver, 'ping_all_chassis',
                           return_value=False).start()
 
-    def test_agent_show_non_ovn(self):
-        self.assertTrue(self.plugin.get_agent(self.context, self.test_agent))
+        metadata_agent_id = uuidutils.generate_uuid()
+        # To be *mostly* sure the agent cache has been updated, we need to
+        # wait for the Chassis events to run. So add a new event that should
+        # run after they do and wait for it. I've only had to do this when
+        # adding *a bunch* of Chassis at a time, but better safe than sorry.
+        chassis_name = uuidutils.generate_uuid()
+        agent_event = AgentWaitEvent(self.mech_driver, [chassis_name])
+        self.sb_api.idl.notify_handler.watch_event(agent_event)
+
+        self.chassis = self.add_fake_chassis(
+            self.host, name=chassis_name,
+            external_ids={
+                ovn_const.OVN_AGENT_METADATA_ID_KEY: metadata_agent_id})
+
+        self.assertTrue(agent_event.wait())
+
+        self.agent_types = {
+            self.TEST_AGENT: self._create_test_agent(),
+            ovn_const.OVN_CONTROLLER_AGENT: self.chassis,
+            ovn_const.OVN_METADATA_AGENT: metadata_agent_id,
+        }
+
+    def _create_test_agent(self):
+        agent = {'agent_type': self.TEST_AGENT, 'binary': '/bin/test',
+                 'host': self.host, 'topic': 'test_topic'}
+        _, status = self.plugin.create_or_update_agent(self.context, agent)
+        return status['id']
 
-    def test_agent_show_ovn_controller(self):
-        self.assertTrue(self.plugin.get_agent(self.context,
-                                              self.controller_agent))
+    def test_agent_show(self):
+        for agent_id in self.agent_types.values():
+            self.assertTrue(self.plugin.get_agent(self.context, agent_id))
+
+    def test_agent_list(self):
+        agent_ids = [a['id'] for a in self.plugin.get_agents(
+            self.context, filters={'host': self.host})]
+        self.assertCountEqual(list(self.agent_types.values()), agent_ids)
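Aside: the new TestAgentApi setUp relies on the ovsdbapp wait-event
machinery shown in AgentWaitEvent above. As a condensed sketch of that
pattern in isolation (names and the table are illustrative, not taken
from the patch):

    from ovsdbapp.backend.ovs_idl import event

    class RowCreatedEvent(event.WaitEvent):
        """Fires once a row with the expected name appears."""

        def __init__(self, table, name):
            # Watch only ROW_CREATE notifications on the given table.
            super().__init__((self.ROW_CREATE,), table, None)
            self.name = name

        def match_fn(self, event_type, row, old):
            return row.name == self.name

    # Typical usage: register the event, trigger the change, then block
    # (with a timeout) until the IDL has seen the new row:
    #   ev = RowCreatedEvent('Chassis', 'my-chassis')
    #   idl.notify_handler.watch_event(ev)
    #   ... create the chassis ...
    #   assert ev.wait()

WaitEvent's default run() sets its internal threading.Event, so only
match_fn needs overriding here; AgentWaitEvent above overrides run() as
well because it waits for several rows before signalling.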
diff -Nru neutron-17.1.0/neutron/tests/functional/services/l3_router/test_l3_dvr_router_plugin.py neutron-17.1.1/neutron/tests/functional/services/l3_router/test_l3_dvr_router_plugin.py
--- neutron-17.1.0/neutron/tests/functional/services/l3_router/test_l3_dvr_router_plugin.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/functional/services/l3_router/test_l3_dvr_router_plugin.py	2021-03-13 02:26:48.000000000 +0100
@@ -1025,6 +1025,145 @@
                 floatingips = router_sync_info[0][constants.FLOATINGIP_KEY]
                 self.assertTrue(floatingips[0][constants.DVR_SNAT_BOUND])
 
+    def test_allowed_addr_pairs_delayed_fip_and_update_arp_entry(self):
+        HOST1 = 'host1'
+        helpers.register_l3_agent(
+            host=HOST1, agent_mode=constants.L3_AGENT_MODE_DVR)
+        HOST2 = 'host2'
+        helpers.register_l3_agent(
+            host=HOST2, agent_mode=constants.L3_AGENT_MODE_DVR)
+        router = self._create_router(ha=False)
+        private_net1 = self._make_network(self.fmt, 'net1', True)
+        test_allocation_pools = [{'start': '10.1.0.2',
+                                  'end': '10.1.0.20'}]
+        fixed_vrrp_ip = [{'ip_address': '10.1.0.201'}]
+        kwargs = {'arg_list': (extnet_apidef.EXTERNAL,),
+                  extnet_apidef.EXTERNAL: True}
+        ext_net = self._make_network(self.fmt, '', True, **kwargs)
+        self._make_subnet(
+            self.fmt, ext_net, '10.20.0.1', '10.20.0.0/24',
+            ip_version=constants.IP_VERSION_4, enable_dhcp=True)
+        self.l3_plugin.schedule_router(self.context,
+                                       router['id'],
+                                       candidates=[self.l3_agent])
+
+        # Set gateway to router
+        self.l3_plugin._update_router_gw_info(
+            self.context, router['id'],
+            {'network_id': ext_net['network']['id']})
+        private_subnet1 = self._make_subnet(
+            self.fmt,
+            private_net1,
+            '10.1.0.1',
+            cidr='10.1.0.0/24',
+            ip_version=constants.IP_VERSION_4,
+            allocation_pools=test_allocation_pools,
+            enable_dhcp=True)
+        vrrp_port = self._make_port(
+            self.fmt,
+            private_net1['network']['id'],
+            fixed_ips=fixed_vrrp_ip)
+        allowed_address_pairs = [
+            {'ip_address': '10.1.0.201',
+             'mac_address': vrrp_port['port']['mac_address']}]
+        with self.port(
+                subnet=private_subnet1,
+                device_owner=DEVICE_OWNER_COMPUTE) as int_port,\
+                self.port(subnet=private_subnet1,
+                          device_owner=DEVICE_OWNER_COMPUTE) as int_port2:
+            self.l3_plugin.add_router_interface(
+                self.context, router['id'],
+                {'subnet_id': private_subnet1['subnet']['id']})
+            router_handle = (
+                self.l3_plugin.list_active_sync_routers_on_active_l3_agent(
+                    self.context, self.l3_agent['host'], [router['id']]))
+            self.assertEqual(self.l3_agent['host'],
+                             router_handle[0]['gw_port_host'])
+            with mock.patch.object(self.l3_plugin,
+                                   '_l3_rpc_notifier') as l3_notifier:
+                vm_port = self.core_plugin.update_port(
+                    self.context, int_port['port']['id'],
+                    {'port': {portbindings.HOST_ID: HOST1}})
+                vm_port_mac = vm_port['mac_address']
+                vm_port_fixed_ips = vm_port['fixed_ips']
+                vm_port_subnet_id = vm_port_fixed_ips[0]['subnet_id']
+                vm_arp_table = {
+                    'ip_address': vm_port_fixed_ips[0]['ip_address'],
+                    'mac_address': vm_port_mac,
+                    'subnet_id': vm_port_subnet_id}
+                vm_port2 = self.core_plugin.update_port(
+                    self.context, int_port2['port']['id'],
+                    {'port': {portbindings.HOST_ID: HOST2}})
+                # Now update the VM port with the allowed_address_pair
+                self.core_plugin.update_port(
+                     self.context, vm_port['id'],
+                     {'port': {
+                         'allowed_address_pairs': allowed_address_pairs}})
+                self.core_plugin.update_port(
+                     self.context, vm_port2['id'],
+                     {'port': {
+                         'allowed_address_pairs': allowed_address_pairs}})
+                self.assertEqual(
+                    2, l3_notifier.routers_updated_on_host.call_count)
+                updated_vm_port1 = self.core_plugin.get_port(
+                    self.context, vm_port['id'])
+                updated_vm_port2 = self.core_plugin.get_port(
+                    self.context, vm_port2['id'])
+                expected_allowed_address_pairs = updated_vm_port1.get(
+                    'allowed_address_pairs')
+                self.assertEqual(expected_allowed_address_pairs,
+                                 allowed_address_pairs)
+                expected_allowed_address_pairs_2 = updated_vm_port2.get(
+                    'allowed_address_pairs')
+                self.assertEqual(expected_allowed_address_pairs_2,
+                                 allowed_address_pairs)
+                # Now the VRRP port is attached to the VM port. At this
+                # point, the VRRP port should not have inherited the
+                # port host bindings from the parent VM port.
+                cur_vrrp_port_db = self.core_plugin.get_port(
+                    self.context, vrrp_port['port']['id'])
+                self.assertNotEqual(
+                    cur_vrrp_port_db[portbindings.HOST_ID], HOST1)
+                self.assertNotEqual(
+                    cur_vrrp_port_db[portbindings.HOST_ID], HOST2)
+                # Next we can try to associate the floatingip to the
+                # VRRP port that is already attached to the VM port
+                floating_ip = {'floating_network_id': ext_net['network']['id'],
+                               'router_id': router['id'],
+                               'port_id': vrrp_port['port']['id'],
+                               'tenant_id': vrrp_port['port']['tenant_id']}
+                floating_ip = self.l3_plugin.create_floatingip(
+                    self.context, {'floatingip': floating_ip})
+
+                post_update_vrrp_port_db = self.core_plugin.get_port(
+                    self.context, vrrp_port['port']['id'])
+                vrrp_port_fixed_ips = post_update_vrrp_port_db['fixed_ips']
+                vrrp_port_subnet_id = vrrp_port_fixed_ips[0]['subnet_id']
+                vrrp_arp_table1 = {
+                    'ip_address': vrrp_port_fixed_ips[0]['ip_address'],
+                    'mac_address': vm_port_mac,
+                    'subnet_id': vrrp_port_subnet_id}
+
+                expected_calls = [
+                        mock.call(self.context,
+                                  router['id'], vm_arp_table),
+                        mock.call(self.context,
+                                  router['id'], vrrp_arp_table1)]
+                l3_notifier.add_arp_entry.assert_has_calls(
+                        expected_calls)
+                expected_routers_updated_calls = [
+                        mock.call(self.context, mock.ANY, HOST1),
+                        mock.call(self.context, mock.ANY, HOST2),
+                        mock.call(self.context, mock.ANY, 'host0')]
+                l3_notifier.routers_updated_on_host.assert_has_calls(
+                        expected_routers_updated_calls, any_order=True)
+                self.assertFalse(l3_notifier.routers_updated.called)
+                router_info = (
+                    self.l3_plugin.list_active_sync_routers_on_active_l3_agent(
+                        self.context, self.l3_agent['host'], [router['id']]))
+                floatingips = router_info[0][constants.FLOATINGIP_KEY]
+                self.assertTrue(floatingips[0][constants.DVR_SNAT_BOUND])
+
     def test_dvr_gateway_host_binding_is_set(self):
         router = self._create_router(ha=False)
         private_net1 = self._make_network(self.fmt, 'net1', True)
@@ -1059,6 +1198,110 @@
         self.assertEqual(self.l3_agent['host'],
                          router_handle[0]['gw_port_host'])
 
+    def test_allowed_address_pairs_update_arp_entry(self):
+        HOST1 = 'host1'
+        helpers.register_l3_agent(
+            host=HOST1, agent_mode=constants.L3_AGENT_MODE_DVR)
+        router = self._create_router(ha=False)
+        private_net1 = self._make_network(self.fmt, 'net1', True)
+        test_allocation_pools = [{'start': '10.1.0.2',
+                                  'end': '10.1.0.20'}]
+        fixed_vrrp_ip = [{'ip_address': '10.1.0.201'}]
+        kwargs = {'arg_list': (extnet_apidef.EXTERNAL,),
+                  extnet_apidef.EXTERNAL: True}
+        ext_net = self._make_network(self.fmt, '', True, **kwargs)
+        self._make_subnet(
+            self.fmt, ext_net, '10.20.0.1', '10.20.0.0/24',
+            ip_version=constants.IP_VERSION_4, enable_dhcp=True)
+        self.l3_plugin.schedule_router(self.context,
+                                       router['id'],
+                                       candidates=[self.l3_agent])
+        # Set gateway to router
+        self.l3_plugin._update_router_gw_info(
+            self.context, router['id'],
+            {'network_id': ext_net['network']['id']})
+        private_subnet1 = self._make_subnet(
+            self.fmt,
+            private_net1,
+            '10.1.0.1',
+            cidr='10.1.0.0/24',
+            ip_version=constants.IP_VERSION_4,
+            allocation_pools=test_allocation_pools,
+            enable_dhcp=True)
+        vrrp_port = self._make_port(
+            self.fmt,
+            private_net1['network']['id'],
+            fixed_ips=fixed_vrrp_ip)
+        allowed_address_pairs = [
+            {'ip_address': '10.1.0.201',
+             'mac_address': vrrp_port['port']['mac_address']}]
+        with self.port(
+                subnet=private_subnet1,
+                device_owner=DEVICE_OWNER_COMPUTE) as int_port:
+            self.l3_plugin.add_router_interface(
+                self.context, router['id'],
+                {'subnet_id': private_subnet1['subnet']['id']})
+            router_handle = (
+                self.l3_plugin.list_active_sync_routers_on_active_l3_agent(
+                    self.context, self.l3_agent['host'], [router['id']]))
+            self.assertEqual(self.l3_agent['host'],
+                             router_handle[0]['gw_port_host'])
+            with mock.patch.object(self.l3_plugin,
+                                   '_l3_rpc_notifier') as l3_notifier:
+                vm_port = self.core_plugin.update_port(
+                    self.context, int_port['port']['id'],
+                    {'port': {portbindings.HOST_ID: HOST1}})
+                vm_port_mac = vm_port['mac_address']
+                vm_port_fixed_ips = vm_port['fixed_ips']
+                vm_port_subnet_id = vm_port_fixed_ips[0]['subnet_id']
+                vm_arp_table = {
+                    'ip_address': vm_port_fixed_ips[0]['ip_address'],
+                    'mac_address': vm_port_mac,
+                    'subnet_id': vm_port_subnet_id}
+                self.assertEqual(1, l3_notifier.add_arp_entry.call_count)
+                floating_ip = {'floating_network_id': ext_net['network']['id'],
+                               'router_id': router['id'],
+                               'port_id': vrrp_port['port']['id'],
+                               'tenant_id': vrrp_port['port']['tenant_id']}
+                floating_ip = self.l3_plugin.create_floatingip(
+                    self.context, {'floatingip': floating_ip})
+                vrrp_port_db = self.core_plugin.get_port(
+                    self.context, vrrp_port['port']['id'])
+                self.assertNotEqual(vrrp_port_db[portbindings.HOST_ID], HOST1)
+                # Now update the VM port with the allowed_address_pair
+                self.core_plugin.update_port(
+                     self.context, vm_port['id'],
+                     {'port': {
+                         'allowed_address_pairs': allowed_address_pairs}})
+                updated_vm_port = self.core_plugin.get_port(
+                    self.context, vm_port['id'])
+                expected_allowed_address_pairs = updated_vm_port.get(
+                    'allowed_address_pairs')
+                self.assertEqual(expected_allowed_address_pairs,
+                                 allowed_address_pairs)
+                cur_vrrp_port_db = self.core_plugin.get_port(
+                    self.context, vrrp_port['port']['id'])
+                vrrp_port_fixed_ips = cur_vrrp_port_db['fixed_ips']
+                vrrp_port_subnet_id = vrrp_port_fixed_ips[0]['subnet_id']
+                vrrp_arp_table1 = {
+                    'ip_address': vrrp_port_fixed_ips[0]['ip_address'],
+                    'mac_address': vm_port_mac,
+                    'subnet_id': vrrp_port_subnet_id}
+
+                expected_calls = [
+                        mock.call(self.context,
+                                  router['id'], vm_arp_table),
+                        mock.call(self.context,
+                                  router['id'], vrrp_arp_table1)]
+                l3_notifier.add_arp_entry.assert_has_calls(
+                        expected_calls)
+                expected_routers_updated_calls = [
+                        mock.call(self.context, mock.ANY, HOST1),
+                        mock.call(self.context, mock.ANY, 'host0')]
+                l3_notifier.routers_updated_on_host.assert_has_calls(
+                        expected_routers_updated_calls)
+                self.assertFalse(l3_notifier.routers_updated.called)
+
     def test_update_vm_port_host_router_update(self):
         # register l3 agents in dvr mode in addition to existing dvr_snat agent
         HOST1 = 'host1'
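Aside: both allowed-address-pairs tests above converge on the same RPC
payload: once the VRRP VIP gets its floating IP, the agents must receive
an ARP entry mapping the VIP to the MAC of the VM port that owns the
pair, not to the (unbound) VRRP port. Reduced to its shape, with the
values the tests use:

    vrrp_arp_table = {
        'ip_address': '10.1.0.201',        # the VIP from the allowed pair
        'mac_address': vm_port_mac,        # MAC of the bound VM port
        'subnet_id': vrrp_port_subnet_id,  # subnet the VIP lives in
    }
    # Sent once per hosting router:
    # l3_notifier.add_arp_entry(context, router['id'], vrrp_arp_table)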
diff -Nru neutron-17.1.0/neutron/tests/unit/agent/dhcp/test_agent.py neutron-17.1.1/neutron/tests/unit/agent/dhcp/test_agent.py
--- neutron-17.1.0/neutron/tests/unit/agent/dhcp/test_agent.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/agent/dhcp/test_agent.py	2021-03-13 02:26:48.000000000 +0100
@@ -15,6 +15,7 @@
 
 import collections
 import copy
+import datetime
 import sys
 from unittest import mock
 import uuid
@@ -2376,3 +2377,63 @@
         self.assertEqual(2, device.route.get_gateway.call_count)
         self.assertFalse(device.route.delete_gateway.called)
         device.route.add_gateway.assert_has_calls(expected)
+
+
+class TestDHCPResourceUpdate(base.BaseTestCase):
+
+    date1 = datetime.datetime(year=2021, month=2, day=1, hour=9, minute=1,
+                              second=2)
+    date2 = datetime.datetime(year=2021, month=2, day=1, hour=9, minute=1,
+                              second=1)  # older than date1
+
+    def test__lt__no_port_event(self):
+        # Lower numerical priority always gets precedence. DHCPResourceUpdate
+        # (and ResourceUpdate) objects with higher precedence compare as
+        # "lower" in a "__lt__" method comparison.
+        update1 = dhcp_agent.DHCPResourceUpdate('id1', 5, obj_type='network')
+        update2 = dhcp_agent.DHCPResourceUpdate('id2', 6, obj_type='network')
+        self.assertLess(update1, update2)
+
+    def test__lt__no_port_event_timestamp(self):
+        update1 = dhcp_agent.DHCPResourceUpdate(
+            'id1', 5, timestamp=self.date1, obj_type='network')
+        update2 = dhcp_agent.DHCPResourceUpdate(
+            'id2', 6, timestamp=self.date2, obj_type='network')
+        self.assertLess(update1, update2)
+
+    def test__lt__port_no_fixed_ips(self):
+        update1 = dhcp_agent.DHCPResourceUpdate(
+            'id1', 5, timestamp=self.date1, resource={}, obj_type='port')
+        update2 = dhcp_agent.DHCPResourceUpdate(
+            'id2', 6, timestamp=self.date2, resource={}, obj_type='port')
+        self.assertLess(update1, update2)
+
+    def test__lt__port_fixed_ips_not_matching(self):
+        resource1 = {'fixed_ips': [
+            {'subnet_id': 'subnet1', 'ip_address': '10.0.0.1'}]}
+        resource2 = {'fixed_ips': [
+            {'subnet_id': 'subnet1', 'ip_address': '10.0.0.2'},
+            {'subnet_id': 'subnet2', 'ip_address': '10.0.1.1'}]}
+        update1 = dhcp_agent.DHCPResourceUpdate(
+            'id1', 5, timestamp=self.date1, resource=resource1,
+            obj_type='port')
+        update2 = dhcp_agent.DHCPResourceUpdate(
+            'id2', 6, timestamp=self.date2, resource=resource2,
+            obj_type='port')
+        self.assertLess(update1, update2)
+
+    def test__lt__port_fixed_ips_matching(self):
+        resource1 = {'fixed_ips': [
+            {'subnet_id': 'subnet1', 'ip_address': '10.0.0.1'}]}
+        resource2 = {'fixed_ips': [
+            {'subnet_id': 'subnet1', 'ip_address': '10.0.0.1'},
+            {'subnet_id': 'subnet2', 'ip_address': '10.0.0.2'}]}
+        update1 = dhcp_agent.DHCPResourceUpdate(
+            'id1', 5, timestamp=self.date1, resource=resource1,
+            obj_type='port')
+        update2 = dhcp_agent.DHCPResourceUpdate(
+            'id2', 6, timestamp=self.date2, resource=resource2,
+            obj_type='port')
+        # In this case, both "port" events have matching IPs, so the
+        # "__lt__" method falls back to the timestamp: date2 < date1.
+        self.assertLess(update2, update1)
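Aside: the ordering rules these five tests pin down can be summarised in
a few lines. This is only a plausible shape consistent with the tests,
not the actual DHCPResourceUpdate code from neutron.agent.dhcp.agent:

    class Update:
        def __init__(self, priority, timestamp, fixed_ips=None):
            self.priority = priority
            self.timestamp = timestamp
            self.fixed_ips = fixed_ips or []

        def _pairs(self):
            return {(ip['subnet_id'], ip['ip_address'])
                    for ip in self.fixed_ips}

        def __lt__(self, other):
            # Port updates touching the same (subnet, IP) pair replay in
            # arrival order: the older timestamp sorts first.
            if self._pairs() & other._pairs():
                return self.timestamp < other.timestamp
            # Otherwise numeric priority decides; the lower number wins.
            return self.priority < other.priority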
diff -Nru neutron-17.1.0/neutron/tests/unit/agent/l3/test_dvr_local_router.py neutron-17.1.1/neutron/tests/unit/agent/l3/test_dvr_local_router.py
--- neutron-17.1.0/neutron/tests/unit/agent/l3/test_dvr_local_router.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/agent/l3/test_dvr_local_router.py	2021-03-13 02:26:48.000000000 +0100
@@ -861,7 +861,7 @@
         fip = {'id': _uuid()}
         fip_cidr = '11.22.33.44/24'
 
-        ri = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
+        ri = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, **self.ri_kwargs)
         ri.is_router_primary = mock.Mock(return_value=False)
         ri._add_vip = mock.Mock()
         interface_name = ri.get_snat_external_device_interface_name(
@@ -872,7 +872,7 @@
 
         router[lib_constants.HA_INTERFACE_KEY]['status'] = 'DOWN'
         self._set_ri_kwargs(agent, router['id'], router)
-        ri_1 = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
+        ri_1 = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, **self.ri_kwargs)
         ri_1.is_router_primary = mock.Mock(return_value=True)
         ri_1._add_vip = mock.Mock()
         interface_name = ri_1.get_snat_external_device_interface_name(
@@ -883,7 +883,7 @@
 
         router[lib_constants.HA_INTERFACE_KEY]['status'] = 'ACTIVE'
         self._set_ri_kwargs(agent, router['id'], router)
-        ri_2 = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
+        ri_2 = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, **self.ri_kwargs)
         ri_2.is_router_primary = mock.Mock(return_value=True)
         ri_2._add_vip = mock.Mock()
         interface_name = ri_2.get_snat_external_device_interface_name(
@@ -905,14 +905,14 @@
         self._set_ri_kwargs(agent, router['id'], router)
         fip_cidr = '11.22.33.44/24'
 
-        ri = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
+        ri = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, **self.ri_kwargs)
         ri.is_router_primary = mock.Mock(return_value=False)
         ri._remove_vip = mock.Mock()
         ri.remove_centralized_floatingip(fip_cidr)
         ri._remove_vip.assert_called_once_with(fip_cidr)
         super_remove_centralized_floatingip.assert_not_called()
 
-        ri1 = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
+        ri1 = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, **self.ri_kwargs)
         ri1.is_router_primary = mock.Mock(return_value=True)
         ri1._remove_vip = mock.Mock()
         ri1.remove_centralized_floatingip(fip_cidr)
@@ -928,10 +928,9 @@
         router[lib_constants.HA_INTERFACE_KEY]['status'] = 'ACTIVE'
         self.mock_driver.unplug.reset_mock()
         self._set_ri_kwargs(agent, router['id'], router)
-        ri = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
+        ri = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, **self.ri_kwargs)
         ri._ha_state_path = self.get_temp_file_path('router_ha_state')
         ri._create_snat_namespace = mock.Mock()
-        ri.update_initial_state = mock.Mock()
         ri._plug_external_gateway = mock.Mock()
         ri.initialize(mock.Mock())
         ri._create_dvr_gateway(mock.Mock(), mock.Mock())
@@ -947,14 +946,13 @@
         self.mock_driver.unplug.reset_mock()
         self._set_ri_kwargs(agent, router['id'], router)
 
-        ri = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
+        ri = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, **self.ri_kwargs)
         ri._ha_state_path = self.get_temp_file_path('router_ha_state')
 
         with open(ri._ha_state_path, "w") as f:
             f.write("primary")
 
         ri._create_snat_namespace = mock.Mock()
-        ri.update_initial_state = mock.Mock()
         ri._plug_external_gateway = mock.Mock()
         with mock.patch("neutron.agent.linux.keepalived."
                         "KeepalivedManager.check_processes",
diff -Nru neutron-17.1.0/neutron/tests/unit/agent/l3/test_ha_router.py neutron-17.1.1/neutron/tests/unit/agent/l3/test_ha_router.py
--- neutron-17.1.0/neutron/tests/unit/agent/l3/test_ha_router.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/agent/l3/test_ha_router.py	2021-03-13 02:26:48.000000000 +0100
@@ -40,8 +40,7 @@
             router = mock.MagicMock()
         self.agent_conf = mock.Mock()
         self.router_id = _uuid()
-        return ha_router.HaRouter(mock.sentinel.enqueue_state,
-                                  mock.sentinel.agent,
+        return ha_router.HaRouter(mock.sentinel.agent,
                                   self.router_id,
                                   router,
                                   self.agent_conf,
diff -Nru neutron-17.1.0/neutron/tests/unit/agent/linux/openvswitch_firewall/test_firewall.py neutron-17.1.1/neutron/tests/unit/agent/linux/openvswitch_firewall/test_firewall.py
--- neutron-17.1.0/neutron/tests/unit/agent/linux/openvswitch_firewall/test_firewall.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/agent/linux/openvswitch_firewall/test_firewall.py	2021-03-13 02:26:48.000000000 +0100
@@ -24,6 +24,7 @@
 
 from neutron.agent.common import ovs_lib
 from neutron.agent.common import utils
+from neutron.agent import firewall as agent_firewall
 from neutron.agent.linux.openvswitch_firewall import constants as ovsfw_consts
 from neutron.agent.linux.openvswitch_firewall import exceptions
 from neutron.agent.linux.openvswitch_firewall import firewall as ovsfw
@@ -40,8 +41,14 @@
 
 def create_ofport(port_dict, network_type=None,
                   physical_network=None, segment_id=TESTING_SEGMENT):
+    allowed_pairs_v4 = ovsfw.OFPort._get_allowed_pairs(
+        port_dict, version=constants.IPv4)
+    allowed_pairs_v6 = ovsfw.OFPort._get_allowed_pairs(
+        port_dict, version=constants.IPv6)
     ovs_port = mock.Mock(vif_mac='00:00:00:00:00:00', ofport=1,
-                         port_name="port-name")
+                         port_name="port-name",
+                         allowed_pairs_v4=allowed_pairs_v4,
+                         allowed_pairs_v6=allowed_pairs_v6)
     return ovsfw.OFPort(port_dict, ovs_port, vlan_tag=TESTING_VLAN_TAG,
                         segment_id=segment_id,
                         network_type=network_type,
@@ -440,6 +447,9 @@
             ovs_lib, 'OVSBridge', autospec=True).start()
         securitygroups_rpc.register_securitygroups_opts()
         self.firewall = ovsfw.OVSFirewallDriver(mock_bridge)
+        self.delete_invalid_conntrack_entries_mock = mock.patch.object(
+            self.firewall.ipconntrack,
+            "delete_conntrack_state_by_remote_ips").start()
         self.mock_bridge = self.firewall.int_br
         self.mock_bridge.reset_mock()
         self.fake_ovs_port = FakeOVSPort('port', 1, '00:00:00:00:00:00')
@@ -464,6 +474,16 @@
              'direction': constants.EGRESS_DIRECTION}]
         self.firewall.update_security_group_rules(2, security_group_rules)
 
+    def _assert_invalid_conntrack_entries_deleted(self, port_dict):
+        port_dict['of_port'] = mock.Mock(vlan_tag=10)
+        self.delete_invalid_conntrack_entries_mock.assert_has_calls([
+            mock.call(
+                [port_dict], constants.IPv4, set(),
+                mark=ovsfw_consts.CT_MARK_INVALID),
+            mock.call(
+                [port_dict], constants.IPv6, set(),
+                mark=ovsfw_consts.CT_MARK_INVALID)])
+
     @property
     def port_ofport(self):
         return self.mock_bridge.br.get_vif_port_by_id.return_value.ofport
@@ -621,6 +641,7 @@
         calls = self.mock_bridge.br.add_flow.call_args_list
         for call in exp_ingress_classifier, exp_egress_classifier, filter_rule:
             self.assertIn(call, calls)
+        self._assert_invalid_conntrack_entries_deleted(port_dict)
 
     def test_prepare_port_filter_port_security_disabled(self):
         port_dict = {'device': 'port-id',
@@ -631,6 +652,7 @@
                 self.firewall, 'initialize_port_flows') as m_init_flows:
             self.firewall.prepare_port_filter(port_dict)
         self.assertFalse(m_init_flows.called)
+        self.delete_invalid_conntrack_entries_mock.assert_not_called()
 
     def _test_initialize_port_flows_dvr_conntrack_direct(self, network_type):
         port_dict = {
@@ -802,6 +824,7 @@
         self.assertFalse(self.mock_bridge.br.delete_flows.called)
         self.firewall.prepare_port_filter(port_dict)
         self.assertTrue(self.mock_bridge.br.delete_flows.called)
+        self._assert_invalid_conntrack_entries_deleted(port_dict)
 
     def test_update_port_filter(self):
         port_dict = {'device': 'port-id',
@@ -833,6 +856,7 @@
             table=ovs_consts.RULES_EGRESS_TABLE)]
         self.mock_bridge.br.add_flow.assert_has_calls(
             filter_rules, any_order=True)
+        self._assert_invalid_conntrack_entries_deleted(port_dict)
 
     def test_update_port_filter_create_new_port_if_not_present(self):
         port_dict = {'device': 'port-id',
@@ -852,15 +876,18 @@
         self.assertFalse(self.mock_bridge.br.delete_flows.called)
         self.assertTrue(initialize_port_flows_mock.called)
         self.assertTrue(add_flows_from_rules_mock.called)
+        self._assert_invalid_conntrack_entries_deleted(port_dict)
 
     def test_update_port_filter_port_security_disabled(self):
         port_dict = {'device': 'port-id',
                      'security_groups': [1]}
         self._prepare_security_group()
         self.firewall.prepare_port_filter(port_dict)
+        self.delete_invalid_conntrack_entries_mock.reset_mock()
         port_dict['port_security_enabled'] = False
         self.firewall.update_port_filter(port_dict)
         self.assertTrue(self.mock_bridge.br.delete_flows.called)
+        self.delete_invalid_conntrack_entries_mock.assert_not_called()
 
     def test_update_port_filter_applies_added_flows(self):
         """Check flows are applied right after _set_flows is called."""
@@ -881,6 +908,7 @@
         self.mock_bridge.br.get_vif_port_by_id.return_value = None
         self.firewall.update_port_filter(port_dict)
         self.assertTrue(self.mock_bridge.br.delete_flows.called)
+        self._assert_invalid_conntrack_entries_deleted(port_dict)
 
     def test_remove_port_filter(self):
         port_dict = {'device': 'port-id',
@@ -981,6 +1009,38 @@
         with testtools.ExpectedException(exceptions.OVSFWPortNotHandled):
             self.firewall._remove_egress_no_port_security('foo')
 
+    def test__initialize_egress_ipv6_icmp(self):
+        port_dict = {
+            'device': 'port-id',
+            'security_groups': [1],
+            'fixed_ips': ["10.0.0.1"],
+            'allowed_address_pairs': [
+                {'mac_address': 'aa:bb:cc:dd:ee:ff',
+                 'ip_address': '192.168.1.1'},
+                {'mac_address': 'aa:bb:cc:dd:ee:ff',
+                 'ip_address': '2003::1'}
+            ]}
+        of_port = create_ofport(port_dict)
+        self.mock_bridge.br.db_get_val.return_value = {'tag': TESTING_VLAN_TAG}
+        self.firewall._initialize_egress_ipv6_icmp(
+            of_port, set([('aa:bb:cc:dd:ee:ff', '2003::1')]))
+        expected_calls = []
+        for icmp_type in agent_firewall.ICMPV6_ALLOWED_EGRESS_TYPES:
+            expected_calls.append(
+                mock.call(
+                    table=ovs_consts.BASE_EGRESS_TABLE,
+                    priority=95,
+                    in_port=TESTING_VLAN_TAG,
+                    reg5=TESTING_VLAN_TAG,
+                    dl_type='0x86dd',
+                    nw_proto=constants.PROTO_NUM_IPV6_ICMP,
+                    icmp_type=icmp_type,
+                    dl_src='aa:bb:cc:dd:ee:ff',
+                    ipv6_src='2003::1',
+                    actions='resubmit(,%d)' % (
+                        ovs_consts.ACCEPTED_EGRESS_TRAFFIC_NORMAL_TABLE)))
+        self.mock_bridge.br.add_flow.assert_has_calls(expected_calls)
+
     def test_process_trusted_ports_caches_port_id(self):
         vif_port = ovs_lib.VifPort('name', 1, 'id', 'mac', mock.ANY)
         with mock.patch.object(self.firewall.int_br.br, 'get_vifs_by_ids',
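Aside: the _assert_invalid_conntrack_entries_deleted helper used
throughout captures the new driver behaviour: whenever flows for a port
are (re)installed, conntrack entries previously marked invalid are
flushed, once per ethertype. The asserted mock calls correspond to
(sketch only, not the driver's body):

    for ethertype in (constants.IPv4, constants.IPv6):
        firewall.ipconntrack.delete_conntrack_state_by_remote_ips(
            [port_dict], ethertype, set(),
            mark=ovsfw_consts.CT_MARK_INVALID)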
diff -Nru neutron-17.1.0/neutron/tests/unit/agent/linux/test_dhcp.py neutron-17.1.1/neutron/tests/unit/agent/linux/test_dhcp.py
--- neutron-17.1.0/neutron/tests/unit/agent/linux/test_dhcp.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/agent/linux/test_dhcp.py	2021-03-13 02:26:48.000000000 +0100
@@ -308,6 +308,19 @@
                                for ip in self.fixed_ips]
 
 
+class FakeRouterHAPort(object):
+    def __init__(self):
+        self.id = 'hahahaha-haha-haha-haha-hahahahahaha'
+        self.admin_state_up = True
+        self.device_owner = constants.DEVICE_OWNER_ROUTER_HA_INTF
+        self.mac_address = '00:00:0f:aa:aa:aa'
+        self.device_id = 'fake_router_ha_port'
+        self.dns_assignment = []
+        self.extra_dhcp_opts = []
+        self.fixed_ips = [FakeIPAllocation(
+            '169.254.169.20', 'dddddddd-dddd-dddd-dddd-dddddddddddd')]
+
+
 class FakeRouterPortNoDHCP(object):
     def __init__(self, dev_owner=constants.DEVICE_OWNER_ROUTER_INTF,
                  ip_address='192.168.0.1', domain='openstacklocal'):
@@ -694,6 +707,7 @@
         self.namespace = 'qdhcp-ns'
         self.ports = [FakePort1(domain=domain), FakeV6Port(domain=domain),
                       FakeDualPort(domain=domain),
+                      FakeRouterHAPort(),
                       FakeRouterPort(domain=domain)]
 
 
diff -Nru neutron-17.1.0/neutron/tests/unit/agent/linux/test_ip_conntrack.py neutron-17.1.1/neutron/tests/unit/agent/linux/test_ip_conntrack.py
--- neutron-17.1.0/neutron/tests/unit/agent/linux/test_ip_conntrack.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/agent/linux/test_ip_conntrack.py	2021-03-13 02:26:48.000000000 +0100
@@ -39,3 +39,26 @@
         dev_info_list = [dev_info for _ in range(10)]
         self.mgr._delete_conntrack_state(dev_info_list, rule)
         self.assertEqual(1, len(self.execute.mock_calls))
+
+
+class OvsIPConntrackTestCase(IPConntrackTestCase):
+
+    def setUp(self):
+        super(IPConntrackTestCase, self).setUp()
+        self.execute = mock.Mock()
+        self.mgr = ip_conntrack.OvsIpConntrackManager(self.execute)
+
+    def test_delete_conntrack_state_dedupes(self):
+        rule = {'ethertype': 'IPv4', 'direction': 'ingress'}
+        dev_info = {
+            'device': 'tapdevice',
+            'fixed_ips': ['1.2.3.4'],
+            'of_port': mock.Mock(of_port=10)}
+        dev_info_list = [dev_info for _ in range(10)]
+        self.mgr._delete_conntrack_state(dev_info_list, rule)
+        self.assertEqual(1, len(self.execute.mock_calls))
+
+    def test_get_device_zone(self):
+        of_port = mock.Mock(vlan_tag=10)
+        port = {'id': 'port-id', 'of_port': of_port}
+        self.assertEqual(10, self.mgr.get_device_zone(port))
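Aside: test_get_device_zone documents how the OVS flavour of the
conntrack manager differs from the legacy one: the conntrack zone is
read straight off the port's OVS VLAN tag. Roughly (illustrative, not
the neutron implementation):

    def get_device_zone(port):
        # OVS firewall ports carry their OpenFlow metadata in 'of_port';
        # its VLAN tag doubles as the per-port conntrack zone.
        return port['of_port'].vlan_tag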
diff -Nru neutron-17.1.0/neutron/tests/unit/agent/linux/test_ip_lib.py neutron-17.1.1/neutron/tests/unit/agent/linux/test_ip_lib.py
--- neutron-17.1.0/neutron/tests/unit/agent/linux/test_ip_lib.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/agent/linux/test_ip_lib.py	2021-03-13 02:26:48.000000000 +0100
@@ -24,7 +24,6 @@
 from oslo_utils import netutils
 from oslo_utils import uuidutils
 import pyroute2
-from pyroute2.netlink.rtnl import ifaddrmsg
 from pyroute2.netlink.rtnl import ifinfmsg
 from pyroute2.netlink.rtnl import ndmsg
 from pyroute2 import NetlinkError
@@ -1603,39 +1602,6 @@
         self.assertEqual(reference, retval)
 
 
-class ParseLinkDeviceTestCase(base.BaseTestCase):
-
-    def setUp(self):
-        super(ParseLinkDeviceTestCase, self).setUp()
-        self._mock_get_ip_addresses = mock.patch.object(priv_lib,
-                                                        'get_ip_addresses')
-        self.mock_get_ip_addresses = self._mock_get_ip_addresses.start()
-        self.addCleanup(self._stop_mock)
-
-    def _stop_mock(self):
-        self._mock_get_ip_addresses.stop()
-
-    def test_parse_link_devices(self):
-        device = ({'index': 1, 'attrs': [['IFLA_IFNAME', 'int_name']]})
-        self.mock_get_ip_addresses.return_value = [
-            {'prefixlen': 24, 'scope': 200, 'event': 'RTM_NEWADDR', 'attrs': [
-                ['IFA_ADDRESS', '192.168.10.20'],
-                ['IFA_FLAGS', ifaddrmsg.IFA_F_PERMANENT]]},
-            {'prefixlen': 64, 'scope': 200, 'event': 'RTM_DELADDR', 'attrs': [
-                ['IFA_ADDRESS', '2001:db8::1'],
-                ['IFA_FLAGS', ifaddrmsg.IFA_F_PERMANENT]]}]
-
-        retval = ip_lib._parse_link_device('namespace', device)
-        expected = [{'scope': 'site', 'cidr': '192.168.10.20/24',
-                     'dynamic': False, 'dadfailed': False, 'name': 'int_name',
-                     'broadcast': None, 'tentative': False, 'event': 'added'},
-                    {'scope': 'site', 'cidr': '2001:db8::1/64',
-                     'dynamic': False, 'dadfailed': False, 'name': 'int_name',
-                     'broadcast': None, 'tentative': False,
-                     'event': 'removed'}]
-        self.assertEqual(expected, retval)
-
-
 class GetDevicesInfoTestCase(base.BaseTestCase):
 
     DEVICE_LO = {
diff -Nru neutron-17.1.0/neutron/tests/unit/agent/ovn/metadata/test_agent.py neutron-17.1.1/neutron/tests/unit/agent/ovn/metadata/test_agent.py
--- neutron-17.1.0/neutron/tests/unit/agent/ovn/metadata/test_agent.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/agent/ovn/metadata/test_agent.py	2021-03-13 02:26:48.000000000 +0100
@@ -235,7 +235,9 @@
                     'update_chassis_metadata_networks') as update_chassis,\
                 mock.patch.object(
                     driver.MetadataDriver,
-                    'spawn_monitored_metadata_proxy') as spawn_mdp:
+                    'spawn_monitored_metadata_proxy') as spawn_mdp, \
+                mock.patch.object(
+                    self.agent, '_ensure_datapath_checksum') as mock_checksum:
 
             # Simulate that the VETH pair was already present in 'br-fake'.
             # We need to assert that it was deleted first.
@@ -268,6 +270,7 @@
                 bind_address=n_const.METADATA_V4_IP, network_id='1')
             # Check that the chassis has been updated with the datapath.
             update_chassis.assert_called_once_with('1')
+            mock_checksum.assert_called_once_with('namespace')
 
     def _test_update_chassis_metadata_networks_helper(
             self, dp, remove, expected_dps, txn_called=True):
diff -Nru neutron-17.1.0/neutron/tests/unit/api/rpc/agentnotifiers/test_dhcp_rpc_agent_api.py neutron-17.1.1/neutron/tests/unit/api/rpc/agentnotifiers/test_dhcp_rpc_agent_api.py
--- neutron-17.1.0/neutron/tests/unit/api/rpc/agentnotifiers/test_dhcp_rpc_agent_api.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/api/rpc/agentnotifiers/test_dhcp_rpc_agent_api.py	2021-03-13 02:26:48.000000000 +0100
@@ -247,7 +247,9 @@
         self._test__notify_agents_with_function(
             lambda: self.notifier._after_router_interface_deleted(
                 mock.ANY, mock.ANY, mock.ANY, context=mock.Mock(),
-                port={'id': 'foo_port_id', 'network_id': 'foo_network_id'}),
+                port={'id': 'foo_port_id', 'network_id': 'foo_network_id',
+                      'fixed_ips': {'subnet_id': 'subnet1',
+                                    'ip_address': '10.0.0.1'}}),
             expected_scheduling=0, expected_casts=1)
 
     def test__fanout_message(self):
diff -Nru neutron-17.1.0/neutron/tests/unit/db/test_db_base_plugin_v2.py neutron-17.1.1/neutron/tests/unit/db/test_db_base_plugin_v2.py
--- neutron-17.1.0/neutron/tests/unit/db/test_db_base_plugin_v2.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/db/test_db_base_plugin_v2.py	2021-03-13 02:26:48.000000000 +0100
@@ -1611,17 +1611,25 @@
             res = req.get_response(self.api)
             self.assertEqual(webob.exc.HTTPConflict.code, res.status_int)
 
-    def test_delete_network_port_exists_owned_by_network(self):
+    def _test_delete_network_port_exists_owned_by_network(self, device_owner):
         res = self._create_network(fmt=self.fmt, name='net',
                                    admin_state_up=True)
         network = self.deserialize(self.fmt, res)
         network_id = network['network']['id']
         self._create_port(self.fmt, network_id,
-                          device_owner=constants.DEVICE_OWNER_DHCP)
+                          device_owner=device_owner)
         req = self.new_delete_request('networks', network_id)
         res = req.get_response(self.api)
         self.assertEqual(webob.exc.HTTPNoContent.code, res.status_int)
 
+    def test_delete_network_port_exists_dhcp(self):
+        self._test_delete_network_port_exists_owned_by_network(
+            constants.DEVICE_OWNER_DHCP)
+
+    def test_delete_network_port_exists_fip_gw(self):
+        self._test_delete_network_port_exists_owned_by_network(
+            constants.DEVICE_OWNER_AGENT_GW)
+
     def test_delete_network_port_exists_owned_by_network_race(self):
         res = self._create_network(fmt=self.fmt, name='net',
                                    admin_state_up=True)
diff -Nru neutron-17.1.0/neutron/tests/unit/db/test_l3_dvr_db.py neutron-17.1.1/neutron/tests/unit/db/test_l3_dvr_db.py
--- neutron-17.1.0/neutron/tests/unit/db/test_l3_dvr_db.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/db/test_l3_dvr_db.py	2021-03-13 02:26:48.000000000 +0100
@@ -1173,6 +1173,72 @@
                                                 port=mock.ANY,
                                                 interface_info=interface_info)
 
+    def test__generate_arp_table_and_notify_agent(self):
+        fixed_ip = {
+            'ip_address': '1.2.3.4',
+            'subnet_id': _uuid()}
+        mac_address = "00:11:22:33:44:55"
+        expected_arp_table = {
+            'ip_address': fixed_ip['ip_address'],
+            'subnet_id': fixed_ip['subnet_id'],
+            'mac_address': mac_address}
+        notifier = mock.Mock()
+        ports = [{'id': _uuid(), 'device_id': 'router_1'},
+                 {'id': _uuid(), 'device_id': 'router_2'}]
+        with mock.patch.object(self.core_plugin, "get_ports",
+                               return_value=ports):
+            self.mixin._generate_arp_table_and_notify_agent(
+                self.ctx, fixed_ip, mac_address, notifier)
+        notifier.assert_has_calls([
+            mock.call(self.ctx, "router_1", expected_arp_table),
+            mock.call(self.ctx, "router_2", expected_arp_table)])
+
+    def _test_update_arp_entry_for_dvr_service_port(
+            self, device_owner, action):
+        router_dict = {'name': 'test_router', 'admin_state_up': True,
+                       'distributed': True}
+        router = self._create_router(router_dict)
+        plugin = mock.Mock()
+        directory.add_plugin(plugin_constants.CORE, plugin)
+        l3_notify = self.mixin.l3_rpc_notifier = mock.Mock()
+        port = {
+            'id': 'my_port_id',
+            'fixed_ips': [
+                {'subnet_id': '51edc9e0-24f9-47f2-8e1e-2a41cb691323',
+                 'ip_address': '10.0.0.11'},
+                {'subnet_id': '2b7c8a07-6f8e-4937-8701-f1d5da1a807c',
+                 'ip_address': '10.0.0.21'},
+                {'subnet_id': '48534187-f077-4e81-93ff-81ec4cc0ad3b',
+                 'ip_address': 'fd45:1515:7e0:0:f816:3eff:fe1a:1111'}],
+            'mac_address': 'my_mac',
+            'device_owner': device_owner
+        }
+        dvr_port = {
+            'id': 'dvr_port_id',
+            'fixed_ips': mock.ANY,
+            'device_owner': const.DEVICE_OWNER_DVR_INTERFACE,
+            'device_id': router['id']
+        }
+        plugin.get_ports.return_value = [dvr_port]
+        if action == 'add':
+            self.mixin.update_arp_entry_for_dvr_service_port(
+                self.ctx, port)
+            self.assertEqual(3, l3_notify.add_arp_entry.call_count)
+        elif action == 'del':
+            self.mixin.delete_arp_entry_for_dvr_service_port(
+                self.ctx, port)
+            self.assertEqual(3, l3_notify.del_arp_entry.call_count)
+
+    def test_update_arp_entry_for_dvr_service_port_added(self):
+        action = 'add'
+        device_owner = const.DEVICE_OWNER_LOADBALANCER
+        self._test_update_arp_entry_for_dvr_service_port(device_owner, action)
+
+    def test_update_arp_entry_for_dvr_service_port_deleted(self):
+        action = 'del'
+        device_owner = const.DEVICE_OWNER_LOADBALANCER
+        self._test_update_arp_entry_for_dvr_service_port(device_owner, action)
+
     def test_add_router_interface_csnat_ports_failure(self):
         router_dict = {'name': 'test_router', 'admin_state_up': True,
                        'distributed': True}
diff -Nru neutron-17.1.0/neutron/tests/unit/db/test_securitygroups_db.py neutron-17.1.1/neutron/tests/unit/db/test_securitygroups_db.py
--- neutron-17.1.0/neutron/tests/unit/db/test_securitygroups_db.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/db/test_securitygroups_db.py	2021-03-13 02:26:48.000000000 +0100
@@ -77,6 +77,10 @@
         self.mock_quota_make_res = make_res.start()
         commit_res = mock.patch.object(quota.QuotaEngine, 'commit_reservation')
         self.mock_quota_commit_res = commit_res.start()
+        is_ext_supported = mock.patch(
+            'neutron_lib.api.extensions.is_extension_supported')
+        self.is_ext_supported = is_ext_supported.start()
+        self.is_ext_supported.return_value = True
 
     def test_create_security_group_conflict(self):
         with mock.patch.object(registry, "publish") as mock_publish:
@@ -601,3 +605,13 @@
             get_default_sg_id.assert_has_calls([
                 mock.call(self.ctx, 'tenant_1'),
                 mock.call(self.ctx, 'tenant_1')])
+
+    def test__ensure_default_security_group_when_disabled(self):
+        with mock.patch.object(
+                    self.mixin, '_get_default_sg_id') as get_default_sg_id,\
+                mock.patch.object(
+                        self.mixin, 'create_security_group') as create_sg:
+            self.is_ext_supported.return_value = False
+            self.mixin._ensure_default_security_group(self.ctx, 'tenant_1')
+            create_sg.assert_not_called()
+            get_default_sg_id.assert_not_called()
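Aside: the new test encodes a short-circuit: with the security-group
extension disabled, _ensure_default_security_group must neither look up
nor create anything. A minimal sketch of such a guard, assuming the
neutron-lib extension check that the test mocks (not the actual method
body):

    from neutron_lib.api import extensions as api_extensions

    def _ensure_default_security_group(self, context, tenant_id):
        if not api_extensions.is_extension_supported(
                self, 'security-group'):
            return  # extension disabled: nothing to look up or create
        ...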
diff -Nru neutron-17.1.0/neutron/tests/unit/extensions/test_floating_ip_port_forwarding.py neutron-17.1.1/neutron/tests/unit/extensions/test_floating_ip_port_forwarding.py
--- neutron-17.1.0/neutron/tests/unit/extensions/test_floating_ip_port_forwarding.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/extensions/test_floating_ip_port_forwarding.py	2021-03-13 02:26:48.000000000 +0100
@@ -71,6 +71,21 @@
 
         return fip_pf_req.get_response(self.ext_api)
 
+    def _update_fip_port_forwarding(self, fmt, floating_ip_id,
+                                    port_forwarding_id, **kwargs):
+        port_forwarding = {}
+        for k, v in kwargs.items():
+            port_forwarding[k] = v
+        data = {'port_forwarding': port_forwarding}
+
+        fip_pf_req = self._req(
+            'PUT', 'floatingips', data,
+            fmt or self.fmt, id=floating_ip_id,
+            sub_id=port_forwarding_id,
+            subresource='port_forwardings')
+
+        return fip_pf_req.get_response(self.ext_api)
+
     def test_create_floatingip_port_forwarding_with_port_number_0(self):
         with self.network() as ext_net:
             network_id = ext_net['network']['id']
@@ -136,3 +151,46 @@
                 pf_body = self.deserialize(self.fmt, res)
                 self.assertEqual(
                     "blablablabla", pf_body['port_forwarding']['description'])
+
+    def test_update_floatingip_port_forwarding_with_dup_internal_port(self):
+        with self.network() as ext_net:
+            network_id = ext_net['network']['id']
+            self._set_net_external(network_id)
+            with self.subnet(ext_net, cidr='10.10.10.0/24'), \
+                    self.router() as router, \
+                    self.subnet(cidr='11.0.0.0/24') as private_subnet, \
+                    self.port(private_subnet) as port:
+                self._add_external_gateway_to_router(
+                    router['router']['id'],
+                    network_id)
+                self._router_interface_action(
+                    'add', router['router']['id'],
+                    private_subnet['subnet']['id'],
+                    None)
+                fip1 = self._make_floatingip(
+                    self.fmt,
+                    network_id)
+                self.assertIsNone(fip1['floatingip'].get('port_id'))
+                self._create_fip_port_forwarding(
+                    self.fmt, fip1['floatingip']['id'],
+                    2222, 22,
+                    'tcp',
+                    port['port']['fixed_ips'][0]['ip_address'],
+                    port['port']['id'],
+                    description="blablablabla")
+                fip2 = self._make_floatingip(
+                    self.fmt,
+                    network_id)
+                fip_pf_response = self._create_fip_port_forwarding(
+                    self.fmt, fip2['floatingip']['id'],
+                    2222, 23,
+                    'tcp',
+                    port['port']['fixed_ips'][0]['ip_address'],
+                    port['port']['id'],
+                    description="blablablabla")
+                update_res = self._update_fip_port_forwarding(
+                    self.fmt, fip2['floatingip']['id'],
+                    fip_pf_response.json['port_forwarding']['id'],
+                    **{'internal_port': 22})
+                self.assertEqual(exc.HTTPBadRequest.code,
+                                 update_res.status_int)
diff -Nru neutron-17.1.0/neutron/tests/unit/extensions/test_l3_conntrack_helper.py neutron-17.1.1/neutron/tests/unit/extensions/test_l3_conntrack_helper.py
--- neutron-17.1.0/neutron/tests/unit/extensions/test_l3_conntrack_helper.py	1970-01-01 01:00:00.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/extensions/test_l3_conntrack_helper.py	2021-03-13 02:26:48.000000000 +0100
@@ -0,0 +1,141 @@
+# Copyright 2021 Troila
+# All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from unittest import mock
+
+from webob import exc
+
+from neutron_lib.api.definitions import l3 as l3_apidef
+from neutron_lib.api.definitions import l3_conntrack_helper as l3_ct
+from neutron_lib import context
+from oslo_utils import uuidutils
+
+from neutron.extensions import l3
+from neutron.extensions import l3_conntrack_helper
+from neutron.tests.unit.api import test_extensions
+from neutron.tests.unit.extensions import test_l3
+
+_uuid = uuidutils.generate_uuid
+
+
+class TestL3ConntrackHelperServicePlugin(test_l3.TestL3NatServicePlugin):
+    supported_extension_aliases = [l3_apidef.ALIAS, l3_ct.ALIAS]
+
+
+class ExtendL3ConntrackHelperExtensionManager(object):
+
+    def get_resources(self):
+        return (l3.L3.get_resources() +
+                l3_conntrack_helper.L3_conntrack_helper.get_resources())
+
+    def get_actions(self):
+        return []
+
+    def get_request_extensions(self):
+        return []
+
+
+class L3NConntrackHelperTestCase(test_l3.L3BaseForIntTests,
+                                 test_l3.L3NatTestCaseMixin):
+    tenant_id = _uuid()
+    fmt = "json"
+
+    def setUp(self):
+        mock.patch('neutron.api.rpc.handlers.resources_rpc.'
+                   'ResourcesPushRpcApi').start()
+        svc_plugins = ('neutron.services.conntrack_helper.plugin.Plugin',
+                       'neutron.tests.unit.extensions.'
+                       'test_l3_conntrack_helper.'
+                       'TestL3ConntrackHelperServicePlugin')
+        plugin = ('neutron.tests.unit.extensions.test_l3.TestL3NatIntPlugin')
+        ext_mgr = ExtendL3ConntrackHelperExtensionManager()
+        super(L3NConntrackHelperTestCase, self).setUp(
+              ext_mgr=ext_mgr, service_plugins=svc_plugins, plugin=plugin)
+        self.ext_api = test_extensions.setup_extensions_middleware(ext_mgr)
+
+    def _create_router_conntrack_helper(self, fmt, router_id,
+                                        protocol, port, helper):
+        tenant_id = self.tenant_id or _uuid()
+        data = {'conntrack_helper': {
+            "protocol": protocol,
+            "port": port,
+            "helper": helper}
+        }
+        router_ct_req = self._req(
+            'POST', 'routers', data,
+            fmt or self.fmt, id=router_id,
+            subresource='conntrack_helpers')
+
+        router_ct_req.environ['neutron.context'] = context.Context(
+            '', tenant_id, is_admin=True)
+
+        return router_ct_req.get_response(self.ext_api)
+
+    def _update_router_conntrack_helper(self, fmt, router_id,
+                                        conntrack_helper_id, **kwargs):
+        conntrack_helper = {}
+        for k, v in kwargs.items():
+            conntrack_helper[k] = v
+        data = {'conntrack_helper': conntrack_helper}
+
+        router_ct_req = self._req(
+            'PUT', 'routers', data,
+            fmt or self.fmt, id=router_id,
+            sub_id=conntrack_helper_id,
+            subresource='conntrack_helpers')
+        return router_ct_req.get_response(self.ext_api)
+
+    def test_create_ct_with_duplicate_entry(self):
+        with self.router() as router:
+            ct1 = self._create_router_conntrack_helper(
+                self.fmt, router['router']['id'],
+                "udp", 69, "tftp")
+            self.assertEqual(exc.HTTPCreated.code, ct1.status_code)
+            ct2 = self._create_router_conntrack_helper(
+                self.fmt, router['router']['id'],
+                "udp", 69, "tftp")
+            self.assertEqual(exc.HTTPBadRequest.code, ct2.status_code)
+            expect_msg = ("Bad conntrack_helper request: A duplicate "
+                          "conntrack helper entry with same attributes "
+                          "already exists, conflicting values are "
+                          "{'router_id': '%s', 'protocol': 'udp', "
+                          "'port': 69, 'helper': "
+                          "'tftp'}.") % router['router']['id']
+            self.assertEqual(
+                expect_msg, ct2.json_body['NeutronError']['message'])
+
+    def test_update_ct_with_duplicate_entry(self):
+        with self.router() as router:
+            ct1 = self._create_router_conntrack_helper(
+                self.fmt, router['router']['id'],
+                "udp", 69, "tftp")
+            self.assertEqual(exc.HTTPCreated.code, ct1.status_code)
+            ct2 = self._create_router_conntrack_helper(
+                self.fmt, router['router']['id'],
+                "udp", 68, "tftp")
+            self.assertEqual(exc.HTTPCreated.code, ct2.status_code)
+            result = self._update_router_conntrack_helper(
+                self.fmt, router['router']['id'],
+                ct1.json['conntrack_helper']['id'],
+                **{'port': 68})
+            self.assertEqual(exc.HTTPBadRequest.code, result.status_code)
+            expect_msg = ("Bad conntrack_helper request: A duplicate "
+                          "conntrack helper entry with same attributes "
+                          "already exists, conflicting values are "
+                          "{'router_id': '%s', 'protocol': 'udp', "
+                          "'port': 68, 'helper': "
+                          "'tftp'}.") % router['router']['id']
+            self.assertEqual(
+                expect_msg, result.json_body['NeutronError']['message'])
diff -Nru neutron-17.1.0/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_maintenance.py neutron-17.1.1/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_maintenance.py
--- neutron-17.1.0/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_maintenance.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_maintenance.py	2021-03-13 02:26:48.000000000 +0100
@@ -395,3 +395,40 @@
                       priority=constants.HA_CHASSIS_GROUP_HIGHEST_PRIORITY - 1)
         ]
         nb_idl.ha_chassis_group_add_chassis.assert_has_calls(expected_calls)
+
+    def test_check_for_mcast_flood_reports(self):
+        nb_idl = self.fake_ovn_client._nb_idl
+        lsp0 = fakes.FakeOvsdbRow.create_one_ovsdb_row(
+            attrs={'name': 'lsp0',
+                   'options': {'mcast_flood_reports': 'true'},
+                   'type': ""})
+        lsp1 = fakes.FakeOvsdbRow.create_one_ovsdb_row(
+            attrs={'name': 'lsp1', 'options': {}, 'type': ""})
+        lsp2 = fakes.FakeOvsdbRow.create_one_ovsdb_row(
+            attrs={'name': 'lsp2', 'options': {},
+                   'type': "vtep"})
+        lsp3 = fakes.FakeOvsdbRow.create_one_ovsdb_row(
+            attrs={'name': 'lsp3', 'options': {},
+                   'type': "localport"})
+        lsp4 = fakes.FakeOvsdbRow.create_one_ovsdb_row(
+            attrs={'name': 'lsp4', 'options': {},
+                   'type': "router"})
+        lsp5 = fakes.FakeOvsdbRow.create_one_ovsdb_row(
+            attrs={'name': 'lsp5', 'options': {}, 'type': 'localnet'})
+
+        nb_idl.lsp_list.return_value.execute.return_value = [
+            lsp0, lsp1, lsp2, lsp3, lsp4, lsp5]
+
+        # Invoke the periodic method; it is meant to run only once at
+        # startup, so NeverAgain will be raised at the end
+        self.assertRaises(periodics.NeverAgain,
+                          self.periodic.check_for_mcast_flood_reports)
+
+        # Assert only lsp1 and lsp5 were called because they are the only
+        # ones meeting the criteria ("mcast_flood_reports" not yet set,
+        # and type "" or localnet)
+        expected_calls = [
+            mock.call('lsp1', mcast_flood_reports='true'),
+            mock.call('lsp5', mcast_flood_reports='true', mcast_flood='true')]
+
+        nb_idl.lsp_set_options.assert_has_calls(expected_calls)
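For readers skimming the hunk above: the new periodic task runs once at
startup (hence NeverAgain) and only touches ports that do not yet have
the option set, and only for types "" and "localnet". A minimal sketch
of the selection rule, with illustrative names rather than the driver's
API:

    def ports_to_update(lsps):
        # lsps: dicts with 'name', 'type' and 'options' keys.
        updates = []
        for lsp in lsps:
            if 'mcast_flood_reports' in lsp['options']:
                continue  # already configured, skip
            if lsp['type'] == '':
                updates.append((lsp['name'],
                                {'mcast_flood_reports': 'true'}))
            elif lsp['type'] == 'localnet':
                updates.append((lsp['name'],
                                {'mcast_flood_reports': 'true',
                                 'mcast_flood': 'true'}))
        return updates

Fed the six fake rows from the test, this yields exactly the two
expected calls, for lsp1 and lsp5.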
diff -Nru neutron-17.1.0/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py neutron-17.1.1/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py
--- neutron-17.1.0/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py	2021-03-13 02:26:48.000000000 +0100
@@ -715,7 +715,9 @@
             external_ids={},
             lport_name=ovn_utils.ovn_provnet_port_name(segments[0]['id']),
             lswitch_name=ovn_utils.ovn_name(net['id']),
-            options={'network_name': 'physnet1'},
+            options={'network_name': 'physnet1',
+                     ovn_const.LSP_OPTIONS_MCAST_FLOOD_REPORTS: 'true',
+                     ovn_const.LSP_OPTIONS_MCAST_FLOOD: 'true'},
             tag=2,
             type='localnet')
 
@@ -1504,13 +1506,13 @@
             self.mech_driver.update_subnet_postcommit(context)
             esd.assert_called_once_with(
                 context.current, context.network.current, mock.ANY)
-            umd.assert_called_once_with(mock.ANY, 'id')
+            umd.assert_called_once_with(mock.ANY, 'id', subnet_id='subnet_id')
 
     def test_update_subnet_postcommit_disable_dhcp(self):
         self.mech_driver._nb_ovn.get_subnet_dhcp_options.return_value = {
             'subnet': mock.sentinel.subnet, 'ports': []}
         context = fakes.FakeSubnetContext(
-            subnet={'enable_dhcp': False, 'id': 'fake_id', 'ip_version': 4,
+            subnet={'enable_dhcp': False, 'id': 'subnet_id', 'ip_version': 4,
                     'network_id': 'id'},
             network={'id': 'id'})
         with mock.patch.object(
@@ -1521,7 +1523,7 @@
                 'update_metadata_port') as umd:
             self.mech_driver.update_subnet_postcommit(context)
             dsd.assert_called_once_with(context.current['id'], mock.ANY)
-            umd.assert_called_once_with(mock.ANY, 'id')
+            umd.assert_called_once_with(mock.ANY, 'id', subnet_id='subnet_id')
 
     def test_update_subnet_postcommit_update_dhcp(self):
         self.mech_driver._nb_ovn.get_subnet_dhcp_options.return_value = {
@@ -1539,7 +1541,63 @@
             self.mech_driver.update_subnet_postcommit(context)
             usd.assert_called_once_with(
                 context.current, context.network.current, mock.ANY)
-            umd.assert_called_once_with(mock.ANY, 'id')
+            umd.assert_called_once_with(mock.ANY, 'id', subnet_id='subnet_id')
+
+    def test_update_metadata_port_with_subnet_present_in_port(self):
+        ovn_conf.cfg.CONF.set_override('ovn_metadata_enabled', True,
+                                       group='ovn')
+        fixed_ips = [{'subnet_id': 'subnet1', 'ip_address': 'ip_add1'}]
+        with mock.patch.object(
+                self.mech_driver._ovn_client, '_find_metadata_port',
+                return_value={'fixed_ips': fixed_ips, 'id': 'metadata_id'}), \
+                mock.patch.object(self.mech_driver._plugin, 'get_subnets',
+                                  return_value=[{'id': 'subnet1'},
+                                                {'id': 'subnet2'}]), \
+                mock.patch.object(self.mech_driver._plugin, 'update_port') as \
+                mock_update_port:
+            self.mech_driver._ovn_client.update_metadata_port(
+                self.context, 'net_id', subnet_id='subnet1')
+            mock_update_port.assert_not_called()
+
+    def test_update_metadata_port_with_subnet_not_present_in_port(self):
+        ovn_conf.cfg.CONF.set_override('ovn_metadata_enabled', True,
+                                       group='ovn')
+        fixed_ips = [{'subnet_id': 'subnet1', 'ip_address': 'ip_add1'}]
+        with mock.patch.object(
+                self.mech_driver._ovn_client, '_find_metadata_port',
+                return_value={'fixed_ips': fixed_ips, 'id': 'metadata_id'}), \
+                mock.patch.object(self.mech_driver._plugin, 'get_subnets',
+                                  return_value=[{'id': 'subnet1'},
+                                                {'id': 'subnet2'}]), \
+                mock.patch.object(self.mech_driver._plugin, 'update_port') as \
+                mock_update_port:
+            self.mech_driver._ovn_client.update_metadata_port(
+                self.context, 'net_id', subnet_id='subnet3')
+            fixed_ips.append({'subnet_id': 'subnet3'})
+            port = {'id': 'metadata_id', 'port': {
+                'network_id': 'net_id', 'fixed_ips': fixed_ips}}
+            mock_update_port.assert_called_once_with(
+                mock.ANY, 'metadata_id', port)
+
+    def test_update_metadata_port_no_subnet(self):
+        ovn_conf.cfg.CONF.set_override('ovn_metadata_enabled', True,
+                                       group='ovn')
+        fixed_ips = [{'subnet_id': 'subnet1', 'ip_address': 'ip_add1'}]
+        with mock.patch.object(
+                self.mech_driver._ovn_client, '_find_metadata_port',
+                return_value={'fixed_ips': fixed_ips, 'id': 'metadata_id'}), \
+                mock.patch.object(self.mech_driver._plugin, 'get_subnets',
+                                  return_value=[{'id': 'subnet1'},
+                                                {'id': 'subnet2'}]), \
+                mock.patch.object(self.mech_driver._plugin, 'update_port') as \
+                mock_update_port:
+            self.mech_driver._ovn_client.update_metadata_port(self.context,
+                                                              'net_id')
+            fixed_ips.append({'subnet_id': 'subnet2'})
+            port = {'id': 'metadata_id', 'port': {
+                'network_id': 'net_id', 'fixed_ips': fixed_ips}}
+            mock_update_port.assert_called_once_with(
+                mock.ANY, 'metadata_id', port)
 
     @mock.patch.object(provisioning_blocks, 'is_object_blocked')
     @mock.patch.object(provisioning_blocks, 'provisioning_complete')
@@ -2076,7 +2134,9 @@
             external_ids={},
             lport_name=ovn_utils.ovn_provnet_port_name(new_segment['id']),
             lswitch_name=ovn_utils.ovn_name(net['id']),
-            options={'network_name': 'phys_net1'},
+            options={'network_name': 'phys_net1',
+                     ovn_const.LSP_OPTIONS_MCAST_FLOOD_REPORTS: 'true',
+                     ovn_const.LSP_OPTIONS_MCAST_FLOOD: 'true'},
             tag=200,
             type='localnet')
         ovn_nb_api.create_lswitch_port.reset_mock()
@@ -2088,7 +2148,9 @@
             external_ids={},
             lport_name=ovn_utils.ovn_provnet_port_name(new_segment['id']),
             lswitch_name=ovn_utils.ovn_name(net['id']),
-            options={'network_name': 'phys_net2'},
+            options={'network_name': 'phys_net2',
+                     ovn_const.LSP_OPTIONS_MCAST_FLOOD_REPORTS: 'true',
+                     ovn_const.LSP_OPTIONS_MCAST_FLOOD: 'true'},
             tag=300,
             type='localnet')
         segments = segments_db.get_network_segments(
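The three update_metadata_port tests encode one rule: the metadata port
gains a fixed IP on a subnet only when it does not already have one
there, and with no subnet_id given, every known subnet missing from the
port is added. A hedged sketch (fixed_ips_after_update is a hypothetical
helper, not the OVNClient method):

    def fixed_ips_after_update(port_fixed_ips, known_subnet_ids,
                               subnet_id=None):
        current = {ip['subnet_id'] for ip in port_fixed_ips}
        wanted = [subnet_id] if subnet_id else known_subnet_ids
        new_ips = list(port_fixed_ips)
        for sid in wanted:
            if sid not in current:
                new_ips.append({'subnet_id': sid})
        # None signals that no port update is needed.
        return new_ips if len(new_ips) > len(port_fixed_ips) else None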
diff -Nru neutron-17.1.0/neutron/tests/unit/scheduler/test_l3_agent_scheduler.py neutron-17.1.1/neutron/tests/unit/scheduler/test_l3_agent_scheduler.py
--- neutron-17.1.0/neutron/tests/unit/scheduler/test_l3_agent_scheduler.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/scheduler/test_l3_agent_scheduler.py	2021-03-13 02:26:48.000000000 +0100
@@ -841,12 +841,20 @@
                 'device_owner': DEVICE_OWNER_COMPUTE,
             },
         }
+        port = kwargs.get('original_port')
         l3plugin = mock.Mock()
         directory.add_plugin(plugin_constants.L3, l3plugin)
         l3_dvrscheduler_db._notify_l3_agent_port_update(
             'port', 'after_update', mock.ANY, **kwargs)
         l3plugin._get_allowed_address_pair_fixed_ips.return_value = (
             ['10.1.0.21'])
+        self.assertFalse(
+            l3plugin.update_arp_entry_for_dvr_service_port.called)
+        l3plugin.delete_arp_entry_for_dvr_service_port.\
+            assert_called_once_with(
+                self.adminContext,
+                port,
+                fixed_ips_to_delete=mock.ANY)
 
     def test__notify_l3_agent_update_port_with_allowed_address_pairs(self):
         port_id = uuidutils.generate_uuid()
@@ -874,6 +882,8 @@
         directory.add_plugin(plugin_constants.L3, l3plugin)
         l3_dvrscheduler_db._notify_l3_agent_port_update(
             'port', 'after_update', mock.ANY, **kwargs)
+        self.assertTrue(
+            l3plugin.update_arp_entry_for_dvr_service_port.called)
 
     def test__notify_l3_agent_when_unbound_port_migrates_to_bound_host(self):
         port_id = 'fake-port'
@@ -930,6 +940,8 @@
         l3_dvrscheduler_db._notify_l3_agent_port_update(
             'port', 'after_update', plugin, **kwargs)
         self.assertFalse(
+            l3plugin.update_arp_entry_for_dvr_service_port.called)
+        self.assertFalse(
             l3plugin.dvr_handle_new_service_port.called)
         self.assertFalse(l3plugin.remove_router_from_l3_agent.called)
         self.assertFalse(l3plugin.get_dvr_routers_to_remove.called)
@@ -946,6 +958,9 @@
         directory.add_plugin(plugin_constants.L3, l3plugin)
         l3_dvrscheduler_db._notify_l3_agent_new_port(
             'port', 'after_create', mock.ANY, **kwargs)
+        l3plugin.update_arp_entry_for_dvr_service_port.\
+            assert_called_once_with(
+                self.adminContext, kwargs.get('port'))
         l3plugin.dvr_handle_new_service_port.assert_called_once_with(
             self.adminContext, kwargs.get('port'))
 
@@ -962,6 +977,8 @@
         l3_dvrscheduler_db._notify_l3_agent_new_port(
             'port', 'after_create', mock.ANY, **kwargs)
         self.assertFalse(
+            l3plugin.update_arp_entry_for_dvr_service_port.called)
+        self.assertFalse(
             l3plugin.dvr_handle_new_service_port.called)
 
     def test__notify_l3_agent_update_port_with_migration_port_profile(self):
@@ -987,6 +1004,9 @@
             l3plugin.dvr_handle_new_service_port.assert_called_once_with(
                     self.adminContext, kwargs.get('port'),
                     dest_host='vm-host2', router_id=None)
+            l3plugin.update_arp_entry_for_dvr_service_port.\
+                assert_called_once_with(
+                        self.adminContext, kwargs.get('port'))
 
     def test__notify_l3_agent_update_port_no_action(self):
         kwargs = {
@@ -1006,6 +1026,8 @@
             'port', 'after_update', mock.ANY, **kwargs)
 
         self.assertFalse(
+            l3plugin.update_arp_entry_for_dvr_service_port.called)
+        self.assertFalse(
             l3plugin.dvr_handle_new_service_port.called)
         self.assertFalse(l3plugin.remove_router_from_l3_agent.called)
         self.assertFalse(l3plugin.get_dvr_routers_to_remove.called)
@@ -1029,6 +1051,10 @@
         directory.add_plugin(plugin_constants.L3, l3plugin)
         l3_dvrscheduler_db._notify_l3_agent_port_update(
             'port', 'after_update', mock.ANY, **kwargs)
+
+        l3plugin.update_arp_entry_for_dvr_service_port.\
+            assert_called_once_with(
+                self.adminContext, kwargs.get('port'))
         self.assertFalse(l3plugin.dvr_handle_new_service_port.called)
 
     def test__notify_l3_agent_update_port_with_ip_update(self):
@@ -1053,6 +1079,9 @@
         l3_dvrscheduler_db._notify_l3_agent_port_update(
             'port', 'after_update', mock.ANY, **kwargs)
 
+        l3plugin.update_arp_entry_for_dvr_service_port.\
+            assert_called_once_with(
+                self.adminContext, kwargs.get('port'))
         self.assertFalse(l3plugin.dvr_handle_new_service_port.called)
 
     def test__notify_l3_agent_update_port_without_ip_change(self):
@@ -1074,6 +1103,7 @@
         l3_dvrscheduler_db._notify_l3_agent_port_update(
             'port', 'after_update', mock.ANY, **kwargs)
 
+        self.assertFalse(l3plugin.update_arp_entry_for_dvr_service_port.called)
         self.assertFalse(l3plugin.dvr_handle_new_service_port.called)
 
     def test__notify_l3_agent_port_binding_change(self):
@@ -1159,10 +1189,15 @@
             if routers_to_remove:
                 (l3plugin.l3_rpc_notifier.router_removed_from_agent.
                  assert_called_once_with(mock.ANY, 'foo_id', source_host))
+                self.assertEqual(
+                    1,
+                    l3plugin.delete_arp_entry_for_dvr_service_port.call_count)
             if fip and is_distributed and not (routers_to_remove and
                     fip['router_id'] is routers_to_remove[0]['router_id']):
                 (l3plugin.l3_rpc_notifier.routers_updated_on_host.
                  assert_called_once_with(mock.ANY, ['router_id'], source_host))
+            self.assertEqual(
+                1, l3plugin.update_arp_entry_for_dvr_service_port.call_count)
             l3plugin.dvr_handle_new_service_port.assert_called_once_with(
                 self.adminContext, kwargs.get('port'),
                 dest_host=None, router_id=router_id)
@@ -1203,6 +1238,12 @@
             l3_dvrscheduler_db._notify_l3_agent_port_update(
                 'port', 'after_update', plugin, **kwargs)
 
+            self.assertEqual(
+                1, l3plugin.delete_arp_entry_for_dvr_service_port.call_count)
+            l3plugin.delete_arp_entry_for_dvr_service_port.\
+                assert_called_once_with(
+                    self.adminContext, mock.ANY)
+
             self.assertFalse(
                 l3plugin.dvr_handle_new_service_port.called)
             (l3plugin.l3_rpc_notifier.router_removed_from_agent.
@@ -1236,6 +1277,9 @@
         l3plugin.get_dvr_routers_to_remove.return_value = removed_routers
         l3_dvrscheduler_db._notify_port_delete(
             'port', 'after_delete', plugin, **kwargs)
+        l3plugin.delete_arp_entry_for_dvr_service_port.\
+            assert_called_once_with(
+                self.adminContext, mock.ANY)
         (l3plugin.l3_rpc_notifier.router_removed_from_agent.
          assert_called_once_with(mock.ANY, 'foo_id', 'foo_host'))
 
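All of the scheduler assertions added above follow one pattern: events
that bind a DVR service port or change its IPs refresh ARP entries,
while unbinding, migration away, and deletion remove them. Very roughly,
and with assumed event names (only the two l3plugin methods come from
the tests):

    def on_port_event(l3plugin, context, event, port, ip_changed=True):
        if event in ('after_create', 'after_update') and ip_changed:
            l3plugin.update_arp_entry_for_dvr_service_port(context, port)
        elif event in ('after_delete', 'unbound'):
            l3plugin.delete_arp_entry_for_dvr_service_port(context, port)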
diff -Nru neutron-17.1.0/neutron/tests/unit/services/qos/drivers/openvswitch/test_driver.py neutron-17.1.1/neutron/tests/unit/services/qos/drivers/openvswitch/test_driver.py
--- neutron-17.1.0/neutron/tests/unit/services/qos/drivers/openvswitch/test_driver.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/services/qos/drivers/openvswitch/test_driver.py	2021-03-13 02:26:48.000000000 +0100
@@ -43,3 +43,5 @@
                     return_value=net):
                 test_method(self.driver.validate_rule_for_port(
                     mock.Mock(), rule, port))
+                test_method(self.driver.validate_rule_for_network(
+                    mock.Mock(), rule, network_id=mock.Mock()))
diff -Nru neutron-17.1.0/neutron/tests/unit/services/qos/drivers/test_manager.py neutron-17.1.1/neutron/tests/unit/services/qos/drivers/test_manager.py
--- neutron-17.1.0/neutron/tests/unit/services/qos/drivers/test_manager.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/services/qos/drivers/test_manager.py	2021-03-13 02:26:48.000000000 +0100
@@ -119,6 +119,29 @@
         else:
             is_rule_supported_mock.assert_not_called()
 
+    def test_validate_rule_for_network(self):
+        driver_manager = self._create_manager_with_drivers({
+            'driver-A': {
+                'is_loaded': True,
+                'rules': {
+                    qos_consts.RULE_TYPE_MINIMUM_BANDWIDTH: {
+                        "min_kbps": {'type:values': None},
+                        'direction': {
+                            'type:values': lib_consts.VALID_DIRECTIONS}
+                    }
+                }
+            }
+        })
+        rule = rule_object.QosMinimumBandwidthRule(
+            self.ctxt, id=uuidutils.generate_uuid())
+
+        is_rule_supported_mock = mock.Mock()
+        is_rule_supported_mock.return_value = True
+        driver_manager._drivers[0].is_rule_supported = is_rule_supported_mock
+        self.assertTrue(driver_manager.validate_rule_for_network(
+                         mock.Mock(), rule, mock.Mock()))
+        is_rule_supported_mock.assert_called_once_with(rule)
+
     def test_validate_rule_for_port_rule_vif_type_supported(self):
         port = self._get_port(
             portbindings.VIF_TYPE_OVS, portbindings.VNIC_NORMAL)
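The new validate_rule_for_network test mirrors the port variant minus
the VIF checks: with no binding to inspect, a rule is valid for a
network as soon as a loaded driver supports it. A sketch under that
assumption (not the manager's real signature):

    def validate_rule_for_network(drivers, rule):
        # Any loaded driver that supports the rule type makes it valid.
        return any(d.is_rule_supported(rule) for d in drivers)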
diff -Nru neutron-17.1.0/neutron/tests/unit/services/qos/test_qos_plugin.py neutron-17.1.1/neutron/tests/unit/services/qos/test_qos_plugin.py
--- neutron-17.1.0/neutron/tests/unit/services/qos/test_qos_plugin.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/services/qos/test_qos_plugin.py	2021-03-13 02:26:48.000000000 +0100
@@ -128,10 +128,8 @@
 
         if has_qos_policy:
             self.port_data['port']['qos_policy_id'] = self.policy.id
-            self.policy.rules = bw_rules
         elif has_net_qos_policy:
             self.port_data['port']['qos_network_policy_id'] = self.policy.id
-            self.policy.rules = bw_rules
 
         self.port = ports_object.Port(
             self.ctxt, **self.port_data['port'])
@@ -142,8 +140,10 @@
 
         with mock.patch('neutron.objects.network.NetworkSegment.get_objects',
                         return_value=[segment_mock]), \
-                mock.patch('neutron.objects.qos.policy.QosPolicy.get_object',
-                           return_value=self.policy):
+                mock.patch(
+                    'neutron.objects.qos.rule.QosMinimumBandwidthRule.'
+                    'get_objects',
+                    return_value=bw_rules):
             return qos_plugin.QoSPlugin._extend_port_resource_request(
                 port_res, self.port)
 
@@ -184,7 +184,7 @@
         )
 
     def test__extend_port_resource_request_non_min_bw_rule(self):
-        port = self._create_and_extend_port([self.rule])
+        port = self._create_and_extend_port([])
 
         self.assertIsNone(port.get('resource_request'))
 
@@ -202,7 +202,6 @@
 
     def test__extend_port_resource_request_inherited_policy(self):
         self.min_rule.direction = lib_constants.EGRESS_DIRECTION
-        self.policy.rules = [self.min_rule]
         self.min_rule.qos_policy_id = self.policy.id
 
         port = self._create_and_extend_port([self.min_rule],
@@ -327,6 +326,8 @@
             'neutron.objects.qos.policy.QosPolicy.get_object',
             return_value=policy_mock
         ) as get_policy, mock.patch.object(
+            self.qos_plugin, "validate_policy_for_network"
+        ) as validate_policy_for_network, mock.patch.object(
             self.qos_plugin, "validate_policy_for_ports"
         ) as validate_policy_for_ports, mock.patch.object(
             self.ctxt, "elevated", return_value=admin_ctxt
@@ -338,6 +339,7 @@
                     states=(kwargs['original_network'],)))
             if policy_id is None or policy_id == original_policy_id:
                 get_policy.assert_not_called()
+                validate_policy_for_network.assert_not_called()
                 get_ports.assert_not_called()
                 validate_policy_for_ports.assert_not_called()
             else:
@@ -385,6 +387,20 @@
             except qos_exc.QosRuleNotSupported:
                 self.fail("QosRuleNotSupported exception unexpectedly raised")
 
+    def test_validate_policy_for_network(self):
+        network = uuidutils.generate_uuid()
+        with mock.patch.object(
+            self.qos_plugin.driver_manager, "validate_rule_for_network",
+            return_value=True
+        ):
+            self.policy.rules = [self.rule]
+            try:
+                self.qos_plugin.validate_policy_for_network(
+                    self.ctxt, self.policy, network_id=network)
+            except qos_exc.QosRuleNotSupportedByNetwork:
+                self.fail("QosRuleNotSupportedByNetwork "
+                          "exception unexpectedly raised")
+
     def test_create_min_bw_rule_on_bound_port(self):
         policy = self._get_policy()
         policy.rules = [self.min_rule]
@@ -1237,6 +1253,35 @@
         network.create()
         return network
 
+    def _test_validate_create_network_callback(self, network_qos=False):
+        net_qos_obj = self._make_qos_policy()
+        net_qos_id = net_qos_obj.id if network_qos else None
+        network = self._make_network(qos_policy_id=net_qos_id)
+        kwargs = {"context": self.context,
+                  "network": network}
+
+        with mock.patch.object(self.qos_plugin,
+                               'validate_policy_for_network') \
+                as mock_validate_policy:
+            self.qos_plugin._validate_create_network_callback(
+                'NETWORK', 'precommit_create', 'test_plugin', **kwargs)
+
+        qos_policy = None
+        if network_qos:
+            qos_policy = net_qos_obj
+
+        if qos_policy:
+            mock_validate_policy.assert_called_once_with(
+                self.context, qos_policy, network.id)
+        else:
+            mock_validate_policy.assert_not_called()
+
+    def test_validate_create_network_callback(self):
+        self._test_validate_create_network_callback(network_qos=True)
+
+    def test_validate_create_network_callback_no_qos(self):
+        self._test_validate_create_network_callback(network_qos=False)
+
     def _test_validate_create_port_callback(self, port_qos=False,
                                             network_qos=False):
         net_qos_obj = self._make_qos_policy()
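The _validate_create_network_callback tests above check that validation
only runs when the network actually carries a QoS policy. Sketched with
an injected lookup (policy_lookup is hypothetical; the real callback
fetches the QosPolicy object itself):

    def validate_network_qos(qos_plugin, context, network, policy_lookup):
        policy_id = network.get('qos_policy_id')
        if policy_id is not None:
            policy = policy_lookup(context, policy_id)
            qos_plugin.validate_policy_for_network(
                context, policy, network['id'])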
diff -Nru neutron-17.1.0/neutron/tests/unit/services/trunk/rpc/test_server.py neutron-17.1.1/neutron/tests/unit/services/trunk/rpc/test_server.py
--- neutron-17.1.0/neutron/tests/unit/services/trunk/rpc/test_server.py	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/neutron/tests/unit/services/trunk/rpc/test_server.py	2021-03-13 02:26:48.000000000 +0100
@@ -104,6 +104,43 @@
         for port in updated_subports[trunk['id']]:
             self.assertEqual('trunk_host_id', port[portbindings.HOST_ID])
 
+    def test_update_subport_bindings_during_migration(self):
+        with self.port() as _parent_port:
+            parent_port = _parent_port
+        trunk = self._create_test_trunk(parent_port)
+        subports = []
+        for vid in range(0, 3):
+            with self.port() as new_port:
+                obj = trunk_obj.SubPort(
+                    context=self.context,
+                    trunk_id=trunk['id'],
+                    port_id=new_port['port']['id'],
+                    segmentation_type='vlan',
+                    segmentation_id=vid)
+                subports.append(obj)
+
+        expected_calls = [
+            mock.call(
+                mock.ANY, subport['port_id'],
+                {'port': {portbindings.HOST_ID: 'new_trunk_host_id',
+                          'device_owner': constants.TRUNK_SUBPORT_OWNER}})
+            for subport in subports]
+
+        test_obj = server.TrunkSkeleton()
+        test_obj._trunk_plugin = self.trunk_plugin
+        test_obj._core_plugin = self.core_plugin
+        port_data = {
+            portbindings.HOST_ID: 'trunk_host_id',
+            portbindings.PROFILE: {'migrating_to': 'new_trunk_host_id'}}
+        with mock.patch.object(
+                self.core_plugin, "get_port",
+                return_value=port_data), \
+            mock.patch.object(
+                test_obj, "_safe_update_trunk"):
+            test_obj.update_subport_bindings(self.context, subports=subports)
+        for expected_call in expected_calls:
+            self.assertIn(expected_call, self.mock_update_port.mock_calls)
+
     def test__handle_port_binding_binding_error(self):
         with self.port() as _trunk_port:
             trunk = self._create_test_trunk(_trunk_port)
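The migration test above checks where subports land while the parent
port is live-migrating: the 'migrating_to' entry in the binding profile
wins over the currently bound host. The keys below are the usual
portbindings constants; the function itself is only illustrative:

    def target_host(parent_port):
        profile = parent_port.get('binding:profile') or {}
        return (profile.get('migrating_to') or
                parent_port.get('binding:host_id'))

With the port_data mocked in the test this returns 'new_trunk_host_id',
the host every expected call binds the subports to.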
diff -Nru neutron-17.1.0/releasenotes/notes/do-not-create-dhcp-entries-for-all-types-of-ports-39c03b3782d2753e.yaml neutron-17.1.1/releasenotes/notes/do-not-create-dhcp-entries-for-all-types-of-ports-39c03b3782d2753e.yaml
--- neutron-17.1.0/releasenotes/notes/do-not-create-dhcp-entries-for-all-types-of-ports-39c03b3782d2753e.yaml	1970-01-01 01:00:00.000000000 +0100
+++ neutron-17.1.1/releasenotes/notes/do-not-create-dhcp-entries-for-all-types-of-ports-39c03b3782d2753e.yaml	2021-03-13 02:26:48.000000000 +0100
@@ -0,0 +1,6 @@
+---
+other:
+  - |
+    To improve the performance of the DHCP agent, it no longer configures the
+    DHCP server for every type of port created in Neutron. For example, floating
+    IPs and router HA interfaces need no DHCP entry, as no client sends DHCP requests for them.
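As an example of the kind of filter the note describes (the device_owner
strings are believed to match neutron's constants, but treat the
function as illustrative):

    SKIP_OWNERS = ('network:floatingip', 'network:router_ha_interface')

    def needs_dhcp_entry(port):
        return port.get('device_owner') not in SKIP_OWNERS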
diff -Nru neutron-17.1.0/releasenotes/notes/ovn-mcast-flood-reports-80fb529120f2af1c.yaml neutron-17.1.1/releasenotes/notes/ovn-mcast-flood-reports-80fb529120f2af1c.yaml
--- neutron-17.1.0/releasenotes/notes/ovn-mcast-flood-reports-80fb529120f2af1c.yaml	1970-01-01 01:00:00.000000000 +0100
+++ neutron-17.1.1/releasenotes/notes/ovn-mcast-flood-reports-80fb529120f2af1c.yaml	2021-03-13 02:26:48.000000000 +0100
@@ -0,0 +1,7 @@
+---
+fixes:
+  - |
+    Fixes a configuration problem in the OVN driver that prevented
+    external IGMP queries from reaching the virtual machines. See
+    `bug 1918108 <https://bugs.launchpad.net/neutron/+bug/1918108>`_
+    for details.
diff -Nru neutron-17.1.0/tox.ini neutron-17.1.1/tox.ini
--- neutron-17.1.0/tox.ini	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/tox.ini	2021-03-13 02:26:48.000000000 +0100
@@ -71,12 +71,15 @@
          # workaround for DB teardown lock contention (bug/1541742)
          OS_TEST_TIMEOUT={env:OS_TEST_TIMEOUT:600}
          OS_TEST_PATH=./neutron/tests/fullstack
+# Because of an issue with stestr and Python 3, we need to avoid producing
+# too much output during tests, so we ignore Python warnings here
+         PYTHONWARNINGS=ignore
 deps =
   {[testenv:functional]deps}
 commands =
   {toxinidir}/tools/generate_dhclient_script_for_fullstack.sh {envdir}
   {toxinidir}/tools/deploy_rootwrap.sh {toxinidir} {envdir}/etc {envdir}/bin
-  stestr run --concurrency 4 {posargs}
+  stestr run --concurrency 3 {posargs}
 
 [testenv:dsvm-fullstack-gate]
 setenv = {[testenv:dsvm-fullstack]setenv}
diff -Nru neutron-17.1.0/zuul.d/base.yaml neutron-17.1.1/zuul.d/base.yaml
--- neutron-17.1.0/zuul.d/base.yaml	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/zuul.d/base.yaml	2021-03-13 02:26:48.000000000 +0100
@@ -24,6 +24,7 @@
       devstack_services:
         # Ignore any default set by devstack. Emit a "disable_all_services".
         base: false
+        etcd3: false
       devstack_localrc:
         INSTALL_TESTONLY_PACKAGES: true
         DATABASE_PASSWORD: stackdb
diff -Nru neutron-17.1.0/zuul.d/rally.yaml neutron-17.1.1/zuul.d/rally.yaml
--- neutron-17.1.0/zuul.d/rally.yaml	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/zuul.d/rally.yaml	2021-03-13 02:26:48.000000000 +0100
@@ -41,7 +41,6 @@
     parent: rally-task-at-devstack
     required-projects:
       - name: openstack/devstack
-      - name: openstack/devstack-gate
       - name: openstack/rally
       - name: openstack/rally-openstack
     irrelevant-files: *irrelevant-files
diff -Nru neutron-17.1.0/zuul.d/tempest-multinode.yaml neutron-17.1.1/zuul.d/tempest-multinode.yaml
--- neutron-17.1.0/zuul.d/tempest-multinode.yaml	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/zuul.d/tempest-multinode.yaml	2021-03-13 02:26:48.000000000 +0100
@@ -6,7 +6,6 @@
     roles:
       - zuul: openstack/neutron-tempest-plugin
     required-projects:
-      - openstack/devstack-gate
       - openstack/neutron
       - openstack/tempest
     pre-run: playbooks/dvr-multinode-scenario-pre-run.yaml
@@ -108,7 +107,6 @@
     parent: tempest-multinode-full-py3
     timeout: 10800
     required-projects:
-      - openstack/devstack-gate
       - openstack/neutron
       - openstack/neutron-tempest-plugin
       - openstack/tempest
diff -Nru neutron-17.1.0/zuul.d/tempest-singlenode.yaml neutron-17.1.1/zuul.d/tempest-singlenode.yaml
--- neutron-17.1.0/zuul.d/tempest-singlenode.yaml	2021-01-22 03:31:35.000000000 +0100
+++ neutron-17.1.1/zuul.d/tempest-singlenode.yaml	2021-03-13 02:26:48.000000000 +0100
@@ -5,7 +5,6 @@
     abstract: true
     timeout: 10800
     required-projects:
-      - openstack/devstack-gate
       - openstack/neutron
       - openstack/tempest
     pre-run: playbooks/configure_ebtables.yaml
@@ -126,7 +125,6 @@
     parent: tempest-integrated-networking
     timeout: 10800
     required-projects:
-      - openstack/devstack-gate
       - openstack/neutron
       - openstack/tempest
     vars:
@@ -144,7 +142,6 @@
     parent: tempest-integrated-networking
     timeout: 10800
     required-projects:
-      - openstack/devstack-gate
       - openstack/neutron
       - openstack/tempest
     vars:
@@ -181,7 +178,6 @@
     parent: tempest-integrated-networking
     timeout: 10800
     required-projects: &ovn-base-required-projects
-      - openstack/devstack-gate
       - openstack/neutron
       - openstack/tempest
     irrelevant-files: *irrelevant-files

--- End Message ---
--- Begin Message ---
Unblocked.

--- End Message ---
