
Bug#864027: unblock: swift/2.10.2-1



Package: release.debian.org
Severity: normal
User: release.debian.org@packages.debian.org
Usertags: unblock

This is a pre-approval request: please unblock package swift/2.10.2-1.

This is a new upstream STABLE (minor version) release, containing only backports
of fixes from master. I removed 3 patches:
- Quarantine_malformed_database_schema_SQLite_errors.patch
- For_any_part_only_one_replica_can_move_in_a_rebalance.patch
- FTBFS_i386.patch
because they have been applied upstream in this release.

Debdiff attached.

Changelog, with fixes:

swift (2.10.2, stable release update)

    * Improvements in key parts of the consistency engine

      - Optimized the common case for hashing filesystem trees, thus
        eliminating a lot of extraneous disk I/O.

      - Updated the `hashes.pkl` file format to include timestamp information
        for race detection. Also simplified hashing logic to prevent race
        conditions and optimize for the common case.

      Upgrade Impact: If you upgrade and roll back, you must delete all
      `hashes.pkl` files.
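
      For example, on a default installation where the object devices live
      under /srv/node (the path is an assumption; adjust it to your layout),
      a roll back could be followed by something like:

          # find /srv/node -name hashes.pkl -delete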

    * If using erasure coding with ISA-L in rs_vand mode and 5 or more parity
      fragments, Swift will emit a warning. This is a configuration that is
      known to harm data durability. In a future release, this warning will be
      upgraded to an error unless the policy is marked as deprecated. All data
      in an erasure code storage policy using isa_l_rs_vand with 5 or more
      parity should be migrated as soon as possible. Please see
      https://bugs.launchpad.net/swift/+bug/1639691 for more information.
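
      As an illustration only (policy index, name and fragment counts are
      made up), a policy such as the following in swift.conf would trigger
      the warning; marking it deprecated avoids the planned future error,
      but the data should still be migrated:

          [storage-policy:2]
          name = ec-isa-l
          policy_type = erasure_coding
          ec_type = isa_l_rs_vand
          ec_num_data_fragments = 10
          ec_num_parity_fragments = 5
          deprecated = yes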

    * Fixed a bug where the ring builder would not allow removal of a device
      when min_part_seconds_left was greater than zero.
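
      For illustration (builder file and device id are hypothetical), a
      device can now be removed without waiting for min_part_hours to
      elapse:

          # swift-ring-builder object.builder remove d2
          # swift-ring-builder object.builder rebalance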

    * PUT subrequests generated from a client-side COPY will now properly log
      the SSC (server-side copy) Swift source field. See
      https://docs.openstack.org/developer/swift/logs.html#swift-source for
      more information.

    * Rings with min_part_hours set to zero will now only move one partition
      replica per rebalance, thus matching behavior when min_part_hours is
      greater than zero.
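
      In practice a ring built with min_part_hours set to zero may now need
      several consecutive rebalances to fully converge, e.g. (builder file
      name hypothetical):

          # swift-ring-builder object.builder rebalance
          # swift-ring-builder object.builder rebalance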

    * Correctly send 412 Precondition Failed if a user sends an
      invalid copy destination. Previously Swift would send a 500
      Internal Server Error.
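
      A hypothetical reproduction (token, URL, names and the exact malformed
      destination are placeholders), showing the new behaviour:

          $ curl -i -X COPY -H "X-Auth-Token: $TOKEN" \
                -H "Destination: no-slash-in-destination" \
                "$STORAGE_URL/mycontainer/myobject"
          HTTP/1.1 412 Precondition Failed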

    * Fixed an error where a container drive failure resulted in double space
      usage on the remaining drives. When a drive holding a container or
      account database is unmounted, the bug would create handoff replicas on
      all remaining drives, increasing the drive space used and filling the
      cluster.

    * Account and container databases will now be quarantined if the
      database schema has been corrupted.

    * Ensure update of the container by object-updater, removing a rare
      possibility that objects would never be added to a container listing.

    * Fixed some minor test compatibility issues.

    * Updated docs to reference appropriate ports.

Thanks for considering.

unblock swift/2.10.2-1

-- System Information:
Debian Release: 9.0
  APT prefers unstable
  APT policy: (500, 'unstable'), (500, 'testing'), (500, 'stable'), (1, 'experimental')
Architecture: amd64
 (x86_64)

Kernel: Linux 4.9.0-3-amd64 (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)
diff -Nru swift-2.10.1/api-ref/source/samples/endpoints-list-response-headers.json swift-2.10.2/api-ref/source/samples/endpoints-list-response-headers.json
--- swift-2.10.1/api-ref/source/samples/endpoints-list-response-headers.json	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/api-ref/source/samples/endpoints-list-response-headers.json	2017-05-26 21:40:54.000000000 +0200
@@ -1,12 +1,12 @@
 {
     "endpoints": [
-        "http://storage01.swiftdrive.com:6008/d8/583/AUTH_dev/EC_cont1/obj";,
-        "http://storage02.swiftdrive.com:6008/d2/583/AUTH_dev/EC_cont1/obj";,
-        "http://storage02.swiftdrive.com:6006/d3/583/AUTH_dev/EC_cont1/obj";,
-        "http://storage02.swiftdrive.com:6008/d5/583/AUTH_dev/EC_cont1/obj";,
-        "http://storage01.swiftdrive.com:6007/d7/583/AUTH_dev/EC_cont1/obj";,
-        "http://storage02.swiftdrive.com:6007/d4/583/AUTH_dev/EC_cont1/obj";,
-        "http://storage01.swiftdrive.com:6006/d6/583/AUTH_dev/EC_cont1/obj";
+        "http://storage01.swiftdrive.com:6208/d8/583/AUTH_dev/EC_cont1/obj";,
+        "http://storage02.swiftdrive.com:6208/d2/583/AUTH_dev/EC_cont1/obj";,
+        "http://storage02.swiftdrive.com:6206/d3/583/AUTH_dev/EC_cont1/obj";,
+        "http://storage02.swiftdrive.com:6208/d5/583/AUTH_dev/EC_cont1/obj";,
+        "http://storage01.swiftdrive.com:6207/d7/583/AUTH_dev/EC_cont1/obj";,
+        "http://storage02.swiftdrive.com:6207/d4/583/AUTH_dev/EC_cont1/obj";,
+        "http://storage01.swiftdrive.com:6206/d6/583/AUTH_dev/EC_cont1/obj";
     ],
     "headers": {
         "X-Backend-Storage-Policy-Index": "2"
diff -Nru swift-2.10.1/api-ref/source/samples/endpoints-list-response.json swift-2.10.2/api-ref/source/samples/endpoints-list-response.json
--- swift-2.10.1/api-ref/source/samples/endpoints-list-response.json	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/api-ref/source/samples/endpoints-list-response.json	2017-05-26 21:40:54.000000000 +0200
@@ -1,8 +1,8 @@
 {
     "endpoints": [
-        "http://storage02.swiftdrive:6002/d2/617/AUTH_dev";,
-        "http://storage01.swiftdrive:6002/d8/617/AUTH_dev";,
-        "http://storage01.swiftdrive:6002/d11/617/AUTH_dev";
+        "http://storage02.swiftdrive:6202/d2/617/AUTH_dev";,
+        "http://storage01.swiftdrive:6202/d8/617/AUTH_dev";,
+        "http://storage01.swiftdrive:6202/d11/617/AUTH_dev";
     ],
     "headers": {}
 }
diff -Nru swift-2.10.1/CHANGELOG swift-2.10.2/CHANGELOG
--- swift-2.10.1/CHANGELOG	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/CHANGELOG	2017-05-26 21:40:54.000000000 +0200
@@ -1,3 +1,56 @@
+swift (2.10.2, stable release update)
+
+    * Improvements in key parts of the consistency engine
+
+      - Optimized the common case for hashing filesystem trees, thus
+        eliminating a lot of extraneous disk I/O.
+
+      - Updated the `hashes.pkl` file format to include timestamp information
+        for race detection. Also simplified hashing logic to prevent race
+        conditions and optimize for the common case.
+
+      Upgrade Impact: If you upgrade and roll back, you must delete all
+      `hashes.pkl` files.
+
+    * If using erasure coding with ISA-L in rs_vand mode and 5 or more parity
+      fragments, Swift will emit a warning. This is a configuration that is
+      known to harm data durability. In a future release, this warning will be
+      upgraded to an error unless the policy is marked as deprecated. All data
+      in an erasure code storage policy using isa_l_rs_vand with 5 or more
+      parity should be migrated as soon as possible. Please see
+      https://bugs.launchpad.net/swift/+bug/1639691 for more information.
+
+    * Fixed a bug where the ring builder would not allow removal of a device
+      when min_part_seconds_left was greater than zero.
+
+    * PUT subrequests generated from a client-side COPY will now properly log
+      the SSC (server-side copy) Swift source field. See
+      https://docs.openstack.org/developer/swift/logs.html#swift-source for
+      more information.
+
+    * Rings with min_part_hours set to zero will now only move one partition
+      replica per rebalance, thus matching behavior when min_part_hours is
+      greater than zero.
+
+    * Correctly send 412 Precondition Failed if a user sends an
+      invalid copy destination. Previously Swift would send a 500
+      Internal Server Error.
+
+    * Fixed error where a container drive error resulted in double space
+      usage on rest drives. When drive with container or account database
+      is unmounted, the bug would create handoff replicas on all remaining
+      drives, increasing the drive space used and filling the cluster.
+
+    * Account and container databases will now be quarantined if the
+      database schema has been corrupted.
+
+    * Ensure update of the container by object-updater, removing a rare
+      possibility that objects would never be added to a container listing.
+
+    * Fixed some minor test compatibility issues.
+
+    * Updated docs to reference appropriate ports.
+
 swift (2.10.1, stable release update)
 
     * Closed a bug where ssync may have written bad fragment data in
diff -Nru swift-2.10.1/debian/changelog swift-2.10.2/debian/changelog
--- swift-2.10.1/debian/changelog	2017-04-20 22:51:28.000000000 +0200
+++ swift-2.10.2/debian/changelog	2017-06-03 13:27:25.000000000 +0200
@@ -1,3 +1,13 @@
+swift (2.10.2-1) UNRELEASED; urgency=medium
+
+  * New upstream stable release
+  * Removed patches applied upstream:
+    - Quarantine_malformed_database_schema_SQLite_errors.patch
+    - For_any_part_only_one_replica_can_move_in_a_rebalance.patch
+    - FTBFS_i386.patch
+
+ -- Ondřej Nový <onovy@debian.org>  Sat, 03 Jun 2017 13:27:25 +0200
+
 swift (2.10.1-3) unstable; urgency=medium
 
   * d/patches/FTBFS_i386.patch: Fix FTBFS on i386 (Closes: #860638)
diff -Nru swift-2.10.1/debian/patches/For_any_part_only_one_replica_can_move_in_a_rebalance.patch swift-2.10.2/debian/patches/For_any_part_only_one_replica_can_move_in_a_rebalance.patch
--- swift-2.10.1/debian/patches/For_any_part_only_one_replica_can_move_in_a_rebalance.patch	2017-04-20 22:51:28.000000000 +0200
+++ swift-2.10.2/debian/patches/For_any_part_only_one_replica_can_move_in_a_rebalance.patch	1970-01-01 01:00:00.000000000 +0100
@@ -1,234 +0,0 @@
-From e5dd050113646a93a0fe7fb1aed4f1cafdc9139f Mon Sep 17 00:00:00 2001
-From: cheng <shcli@cn.ibm.com>
-Date: Sun, 24 Jul 2016 01:10:36 +0000
-Subject: [PATCH] For any part, only one replica can move in a rebalance
-
-With a min_part_hours of zero, it's possible to move more than one
-replicas of the same part in a single rebalance.
-
-This change in behavior only effects min_part_hour zero rings, which
-are understood to be uncommon in production mostly because of this
-very specific and strange behavior of min_part_hour zero rings.
-
-With this change, no matter how small your min_part_hours it will
-always require at least N rebalances to move N part-replicas of the
-same part.
-
-To supplement the existing persisted _last_part_moves structure to
-enforce min_part_hours, this change adds a _part_moved_bitmap that
-exists only during the life of the rebalance, to track when rebalance
-moves a part in order to prevent another replicas of the same part
-from being moved in the same rebalance.
-
-Add authors: Clay Gerrard, clay.gerrard@gmail.com
-             Christian Schwede, cschwede@redhat.com
-
-Closes-bug: #1586167
-
-Change-Id: Ia1629abd5ce6e1b3acc2e94f818ed8223eed993a
-(cherry picked from commit ce26e78)
----
-
-diff --git a/swift/common/ring/builder.py b/swift/common/ring/builder.py
-index ef4f095..f4f2a83 100644
---- a/swift/common/ring/builder.py
-+++ b/swift/common/ring/builder.py
-@@ -108,6 +108,8 @@
-         # a device overrides this behavior as it's assumed that's only
-         # done because of device failure.
-         self._last_part_moves = None
-+        # _part_moved_bitmap record parts have been moved
-+        self._part_moved_bitmap = None
-         # _last_part_moves_epoch indicates the time the offsets in
-         # _last_part_moves is based on.
-         self._last_part_moves_epoch = 0
-@@ -124,6 +126,19 @@
-             self.logger.disabled = True
-             # silence "no handler for X" error messages
-             self.logger.addHandler(NullHandler())
-+
-+    def _set_part_moved(self, part):
-+        self._last_part_moves[part] = 0
-+        byte, bit = divmod(part, 8)
-+        self._part_moved_bitmap[byte] |= (128 >> bit)
-+
-+    def _has_part_moved(self, part):
-+        byte, bit = divmod(part, 8)
-+        return bool(self._part_moved_bitmap[byte] & (128 >> bit))
-+
-+    def _can_part_move(self, part):
-+        return (self._last_part_moves[part] >= self.min_part_hours and
-+                not self._has_part_moved(part))
- 
-     @contextmanager
-     def debug(self):
-@@ -437,6 +452,7 @@
-         if self._last_part_moves is None:
-             self.logger.debug("New builder; performing initial balance")
-             self._last_part_moves = array('B', itertools.repeat(0, self.parts))
-+        self._part_moved_bitmap = bytearray(max(2 ** (self.part_power - 3), 1))
-         self._update_last_part_moves()
- 
-         replica_plan = self._build_replica_plan()
-@@ -876,7 +892,7 @@
-                     dev_id = self._replica2part2dev[replica][part]
-                     if dev_id in dev_ids:
-                         self._replica2part2dev[replica][part] = NONE_DEV
--                        self._last_part_moves[part] = 0
-+                        self._set_part_moved(part)
-                         assign_parts[part].append(replica)
-                         self.logger.debug(
-                             "Gathered %d/%d from dev %d [dev removed]",
-@@ -964,7 +980,7 @@
-         # Now we gather partitions that are "at risk" because they aren't
-         # currently sufficient spread out across the cluster.
-         for part in range(self.parts):
--            if self._last_part_moves[part] < self.min_part_hours:
-+            if (not self._can_part_move(part)):
-                 continue
-             # First, add up the count of replicas at each tier for each
-             # partition.
-@@ -996,7 +1012,7 @@
-                 # has more than one replica of a part assigned to it - which
-                 # would have only been possible on rings built with an older
-                 # version of the code
--                if (self._last_part_moves[part] < self.min_part_hours and
-+                if (not self._can_part_move(part) and
-                         not replicas_at_tier[dev['tiers'][-1]] > 1):
-                     continue
-                 dev['parts_wanted'] += 1
-@@ -1008,7 +1024,7 @@
-                 self._replica2part2dev[replica][part] = NONE_DEV
-                 for tier in dev['tiers']:
-                     replicas_at_tier[tier] -= 1
--                self._last_part_moves[part] = 0
-+                self._set_part_moved(part)
- 
-     def _gather_parts_for_balance_can_disperse(self, assign_parts, start,
-                                                replica_plan):
-@@ -1025,7 +1041,7 @@
-         # they have more partitions than their parts_wanted.
-         for offset in range(self.parts):
-             part = (start + offset) % self.parts
--            if self._last_part_moves[part] < self.min_part_hours:
-+            if (not self._can_part_move(part)):
-                 continue
-             # For each part we'll look at the devices holding those parts and
-             # see if any are overweight, keeping track of replicas_at_tier as
-@@ -1048,7 +1064,7 @@
-             overweight_dev_replica.sort(
-                 key=lambda dr: dr[0]['parts_wanted'])
-             for dev, replica in overweight_dev_replica:
--                if self._last_part_moves[part] < self.min_part_hours:
-+                if (not self._can_part_move(part)):
-                     break
-                 if any(replica_plan[tier]['min'] <=
-                        replicas_at_tier[tier] <
-@@ -1067,7 +1083,7 @@
-                 self._replica2part2dev[replica][part] = NONE_DEV
-                 for tier in dev['tiers']:
-                     replicas_at_tier[tier] -= 1
--                self._last_part_moves[part] = 0
-+                self._set_part_moved(part)
- 
-     def _gather_parts_for_balance(self, assign_parts, replica_plan):
-         """
-@@ -1107,7 +1123,7 @@
-         """
-         for offset in range(self.parts):
-             part = (start + offset) % self.parts
--            if self._last_part_moves[part] < self.min_part_hours:
-+            if (not self._can_part_move(part)):
-                 continue
-             overweight_dev_replica = []
-             for replica in self._replicas_for_part(part):
-@@ -1124,7 +1140,7 @@
-             overweight_dev_replica.sort(
-                 key=lambda dr: dr[0]['parts_wanted'])
-             for dev, replica in overweight_dev_replica:
--                if self._last_part_moves[part] < self.min_part_hours:
-+                if (not self._can_part_move(part)):
-                     break
-                 # this is the most overweight_device holding a replica of this
-                 # part we don't know where it's going to end up - but we'll
-@@ -1136,7 +1152,7 @@
-                     "Gathered %d/%d from dev %d [weight forced]",
-                     part, replica, dev['id'])
-                 self._replica2part2dev[replica][part] = NONE_DEV
--                self._last_part_moves[part] = 0
-+                self._set_part_moved(part)
- 
-     def _reassign_parts(self, reassign_parts, replica_plan):
-         """
-diff --git a/test/unit/common/ring/test_builder.py b/test/unit/common/ring/test_builder.py
-index 9702730..0007f10 100644
---- a/test/unit/common/ring/test_builder.py
-+++ b/test/unit/common/ring/test_builder.py
-@@ -723,7 +723,7 @@
-                     "Partition %d not in zones 0 and 1 (got %r)" %
-                     (part, zones))
- 
--    def test_min_part_hours_zero_will_move_whatever_it_takes(self):
-+    def test_min_part_hours_zero_will_move_one_replica(self):
-         rb = ring.RingBuilder(8, 3, 0)
-         # there'll be at least one replica in z0 and z1
-         rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 0.5,
-@@ -747,6 +747,33 @@
-         rb.validate()
- 
-         self.assertEqual(0, rb.dispersion)
-+        # Only one replica could move, so some zones are quite unbalanced
-+        self.assertAlmostEqual(rb.get_balance(), 66.66, delta=0.5)
-+
-+        # There was only zone 0 and 1 before adding more devices. Only one
-+        # replica should have been moved, therefore we expect 256 parts in zone
-+        # 0 and 1, and a total of 256 in zone 2,3, and 4
-+        expected = defaultdict(int, {0: 256, 1: 256, 2: 86, 3: 85, 4: 85})
-+        self.assertEqual(expected, self._partition_counts(rb, key='zone'))
-+
-+        parts_with_moved_count = defaultdict(int)
-+        for part in range(rb.parts):
-+            zones = set()
-+            for replica in range(rb.replicas):
-+                zones.add(rb.devs[rb._replica2part2dev[replica][part]]['zone'])
-+            moved_replicas = len(zones - {0, 1})
-+            parts_with_moved_count[moved_replicas] += 1
-+
-+        # We expect that every partition moved exactly one replica
-+        expected = {1: 256}
-+        self.assertEqual(parts_with_moved_count, expected)
-+
-+        # After rebalancing two more times, we expect that everything is in a
-+        # good state
-+        rb.rebalance(seed=3)
-+        rb.rebalance(seed=3)
-+
-+        self.assertEqual(0, rb.dispersion)
-         # a balance of w/i a 1% isn't too bad for 3 replicas on 7
-         # devices when part power is only 8
-         self.assertAlmostEqual(rb.get_balance(), 0, delta=0.5)
-diff --git a/test/unit/common/ring/test_utils.py b/test/unit/common/ring/test_utils.py
-index c6d6b21..c40ce85 100644
---- a/test/unit/common/ring/test_utils.py
-+++ b/test/unit/common/ring/test_utils.py
-@@ -664,11 +664,18 @@
-                     'ip': '127.0.0.3', 'port': 10003, 'device': 'sdd1'})
- 
-         # when the biggest tier has the smallest devices things get ugly
-+        # can't move all the part-replicas in one rebalance
-         rb.rebalance(seed=100)
-         report = dispersion_report(rb, verbose=True)
--        self.assertEqual(rb.dispersion, 70.3125)
-+        self.assertEqual(rb.dispersion, 9.375)
-+        self.assertEqual(report['worst_tier'], 'r1z1-127.0.0.1')
-+        self.assertEqual(report['max_dispersion'], 7.18562874251497)
-+        # do a sencond rebalance
-+        rb.rebalance(seed=100)
-+        report = dispersion_report(rb, verbose=True)
-+        self.assertEqual(rb.dispersion, 50.0)
-         self.assertEqual(report['worst_tier'], 'r1z0-127.0.0.3')
--        self.assertEqual(report['max_dispersion'], 88.23529411764706)
-+        self.assertEqual(report['max_dispersion'], 50.0)
- 
-         # ... but overload can square it
-         rb.set_overload(rb.get_required_overload())
diff -Nru swift-2.10.1/debian/patches/FTBFS_i386.patch swift-2.10.2/debian/patches/FTBFS_i386.patch
--- swift-2.10.1/debian/patches/FTBFS_i386.patch	2017-04-20 22:51:28.000000000 +0200
+++ swift-2.10.2/debian/patches/FTBFS_i386.patch	1970-01-01 01:00:00.000000000 +0100
@@ -1,26 +0,0 @@
-From cc8ddf97b666ab91d48f6bc382f6bbdde52cc931 Mon Sep 17 00:00:00 2001
-From: Ondřej Nový <ondrej.novy@firma.seznam.cz>
-Date: Thu, 20 Apr 2017 16:57:15 +0200
-Subject: [PATCH] Fix unit tests on i386 and other archs
-Forwarded: https://review.openstack.org/#/c/458539/
-
-Change-Id: I4f84b725e220e28919570fd7f296b63b34d0375d
----
-
-diff --git a/test/unit/common/test_utils.py b/test/unit/common/test_utils.py
-index c412b95..a00faf9 100644
---- a/test/unit/common/test_utils.py
-+++ b/test/unit/common/test_utils.py
-@@ -3591,6 +3591,12 @@
-         def _fake_syscall(*args):
-             called['syscall'] = args
- 
-+        # Test if current architecture supports changing of priority
-+        try:
-+            utils.NR_ioprio_set()
-+        except OSError as e:
-+            return unittest.skip(e)
-+
-         with patch('swift.common.utils._libc_setpriority',
-                    _fake_setpriority), \
-                 patch('swift.common.utils._posix_syscall', _fake_syscall):
diff -Nru swift-2.10.1/debian/patches/Quarantine_malformed_database_schema_SQLite_errors.patch swift-2.10.2/debian/patches/Quarantine_malformed_database_schema_SQLite_errors.patch
--- swift-2.10.1/debian/patches/Quarantine_malformed_database_schema_SQLite_errors.patch	2017-04-20 22:51:28.000000000 +0200
+++ swift-2.10.2/debian/patches/Quarantine_malformed_database_schema_SQLite_errors.patch	1970-01-01 01:00:00.000000000 +0100
@@ -1,95 +0,0 @@
-From ea1ecf3d8b097a5d24fe81f2c5ee9ec390d41809 Mon Sep 17 00:00:00 2001
-From: Matthew Oliver <matt@oliver.net.au>
-Date: Thu, 01 Dec 2016 09:46:53 +1100
-Subject: [PATCH] Quarantine malformed database schema SQLite errors
-
-Currently if an sqlite3.DatabaseError is thrown when caused by
-a corrupted database schema, it get logged and the database is isn't
-quarantined.
-
-This patch adds the malformed database schema case to the list of
-SQLite errors in possibly_quarantine that will trigger the db to be
-quarantined.
-
-Also it improved the possibly_quarantined unit test to test all existing
-exceptions, and catches exceptions based on the real world except we use
-in code.
-
-Closes-Bug: #1646247
-
-Change-Id: Id9452c88f8394a2a910c34c69361442543aa206d
-(cherry picked from commit 3bde14b)
----
-
-diff --git a/swift/common/db.py b/swift/common/db.py
-index 1f06694..fc5c057 100644
---- a/swift/common/db.py
-+++ b/swift/common/db.py
-@@ -331,6 +331,8 @@
-         """
-         if 'database disk image is malformed' in str(exc_value):
-             exc_hint = 'malformed'
-+        elif 'malformed database schema' in str(exc_value):
-+            exc_hint = 'malformed'
-         elif 'file is encrypted or is not a database' in str(exc_value):
-             exc_hint = 'corrupted'
-         elif 'disk I/O error' in str(exc_value):
-index 45949c9..dd580b2 100644
---- a/test/unit/common/test_db.py
-+++ b/test/unit/common/test_db.py
-@@ -1214,29 +1230,36 @@
-             message = str(e)
-         self.assertEqual(message, '400 Bad Request')
- 
--    def test_possibly_quarantine_disk_error(self):
-+    def test_possibly_quarantine_db_errors(self):
-         dbpath = os.path.join(self.testdir, 'dev', 'dbs', 'par', 'pre', 'db')
--        mkdirs(dbpath)
-         qpath = os.path.join(self.testdir, 'dev', 'quarantined', 'tests', 'db')
--        broker = DatabaseBroker(os.path.join(dbpath, '1.db'))
--        broker.db_type = 'test'
-+        # Data is a list of Excpetions to be raised and expected values in the
-+        # log
-+        data = [
-+            (sqlite3.DatabaseError('database disk image is malformed'),
-+             'malformed'),
-+            (sqlite3.DatabaseError('malformed database schema'), 'malformed'),
-+            (sqlite3.DatabaseError('file is encrypted or is not a database'),
-+             'corrupted'),
-+            (sqlite3.OperationalError('disk I/O error'),
-+             'disk error while accessing')]
- 
--        def stub():
--            raise sqlite3.OperationalError('disk I/O error')
--
--        try:
--            stub()
--        except Exception:
-+        for i, (ex, hint) in enumerate(data):
-+            mkdirs(dbpath)
-+            broker = DatabaseBroker(os.path.join(dbpath, '%d.db' % (i)))
-+            broker.db_type = 'test'
-             try:
--                broker.possibly_quarantine(*sys.exc_info())
--            except Exception as exc:
--                self.assertEqual(
--                    str(exc),
--                    'Quarantined %s to %s due to disk error '
--                    'while accessing database' %
--                    (dbpath, qpath))
--            else:
--                self.fail('Expected an exception to be raised')
-+                raise ex
-+            except (sqlite3.DatabaseError, DatabaseConnectionError):
-+                try:
-+                    broker.possibly_quarantine(*sys.exc_info())
-+                except Exception as exc:
-+                    self.assertEqual(
-+                        str(exc),
-+                        'Quarantined %s to %s due to %s database' %
-+                        (dbpath, qpath, hint))
-+                else:
-+                    self.fail('Expected an exception to be raised')
- 
- if __name__ == '__main__':
-     unittest.main()
diff -Nru swift-2.10.1/debian/patches/series swift-2.10.2/debian/patches/series
--- swift-2.10.1/debian/patches/series	2017-04-20 22:51:28.000000000 +0200
+++ swift-2.10.2/debian/patches/series	2017-06-03 13:27:25.000000000 +0200
@@ -1,5 +1,2 @@
 sphinx_reproducible_build.patch
 syslog_log_name.patch
-Quarantine_malformed_database_schema_SQLite_errors.patch
-For_any_part_only_one_replica_can_move_in_a_rebalance.patch
-FTBFS_i386.patch
diff -Nru swift-2.10.1/install-guide/source/initial-rings.rst swift-2.10.2/install-guide/source/initial-rings.rst
--- swift-2.10.1/install-guide/source/initial-rings.rst	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/install-guide/source/initial-rings.rst	2017-05-26 21:40:54.000000000 +0200
@@ -36,7 +36,7 @@
    .. code-block:: console
 
       # swift-ring-builder account.builder \
-        add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6002 \
+        add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 \
         --device DEVICE_NAME --weight DEVICE_WEIGHT
 
    Replace ``STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
@@ -48,7 +48,7 @@
    .. code-block:: console
 
       # swift-ring-builder account.builder add \
-        --region 1 --zone 1 --ip 10.0.0.51 --port 6002 --device sdb --weight 100
+        --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device sdb --weight 100
 
    Repeat this command for each storage device on each storage node. In the
    example architecture, use the command in four variations:
@@ -56,17 +56,17 @@
    .. code-block:: console
 
       # swift-ring-builder account.builder add \
-        --region 1 --zone 1 --ip 10.0.0.51 --port 6002 --device sdb --weight 100
-      Device d0r1z1-10.0.0.51:6002R10.0.0.51:6002/sdb_"" with 100.0 weight got id 0
+        --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device sdb --weight 100
+      Device d0r1z1-10.0.0.51:6202R10.0.0.51:6202/sdb_"" with 100.0 weight got id 0
       # swift-ring-builder account.builder add \
-        --region 1 --zone 1 --ip 10.0.0.51 --port 6002 --device sdc --weight 100
-      Device d1r1z2-10.0.0.51:6002R10.0.0.51:6002/sdc_"" with 100.0 weight got id 1
+        --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device sdc --weight 100
+      Device d1r1z2-10.0.0.51:6202R10.0.0.51:6202/sdc_"" with 100.0 weight got id 1
       # swift-ring-builder account.builder add \
-        --region 1 --zone 2 --ip 10.0.0.52 --port 6002 --device sdb --weight 100
-      Device d2r1z3-10.0.0.52:6002R10.0.0.52:6002/sdb_"" with 100.0 weight got id 2
+        --region 1 --zone 2 --ip 10.0.0.52 --port 6202 --device sdb --weight 100
+      Device d2r1z3-10.0.0.52:6202R10.0.0.52:6202/sdb_"" with 100.0 weight got id 2
       # swift-ring-builder account.builder add \
-        --region 1 --zone 2 --ip 10.0.0.52 --port 6002 --device sdc --weight 100
-      Device d3r1z4-10.0.0.52:6002R10.0.0.52:6002/sdc_"" with 100.0 weight got id 3
+        --region 1 --zone 2 --ip 10.0.0.52 --port 6202 --device sdc --weight 100
+      Device d3r1z4-10.0.0.52:6202R10.0.0.52:6202/sdc_"" with 100.0 weight got id 3
 
 #. Verify the ring contents:
 
@@ -78,10 +78,10 @@
       The minimum number of hours before a partition can be reassigned is 1
       The overload factor is 0.00% (0.000000)
       Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
-                   0       1     1       10.0.0.51  6002       10.0.0.51              6002      sdb  100.00          0 -100.00
-                   1       1     1       10.0.0.51  6002       10.0.0.51              6002      sdc  100.00          0 -100.00
-                   2       1     2       10.0.0.52  6002       10.0.0.52              6002      sdb  100.00          0 -100.00
-                   3       1     2       10.0.0.52  6002       10.0.0.52              6002      sdc  100.00          0 -100.00
+                   0       1     1       10.0.0.51  6202       10.0.0.51              6202      sdb  100.00          0 -100.00
+                   1       1     1       10.0.0.51  6202       10.0.0.51              6202      sdc  100.00          0 -100.00
+                   2       1     2       10.0.0.52  6202       10.0.0.52              6202      sdb  100.00          0 -100.00
+                   3       1     2       10.0.0.52  6202       10.0.0.52              6202      sdc  100.00          0 -100.00
 
 #. Rebalance the ring:
 
@@ -113,7 +113,7 @@
    .. code-block:: console
 
       # swift-ring-builder container.builder \
-        add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6001 \
+        add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
         --device DEVICE_NAME --weight DEVICE_WEIGHT
 
    Replace ``STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
@@ -125,7 +125,7 @@
    .. code-block:: console
 
       # swift-ring-builder container.builder add \
-        --region 1 --zone 1 --ip 10.0.0.51 --port 6001 --device sdb --weight 100
+        --region 1 --zone 1 --ip 10.0.0.51 --port 6201 --device sdb --weight 100
 
    Repeat this command for each storage device on each storage node. In the
    example architecture, use the command in four variations:
@@ -133,17 +133,17 @@
    .. code-block:: console
 
       # swift-ring-builder container.builder add \
-        --region 1 --zone 1 --ip 10.0.0.51 --port 6001 --device sdb --weight 100
-      Device d0r1z1-10.0.0.51:6001R10.0.0.51:6001/sdb_"" with 100.0 weight got id 0
+        --region 1 --zone 1 --ip 10.0.0.51 --port 6201 --device sdb --weight 100
+      Device d0r1z1-10.0.0.51:6201R10.0.0.51:6201/sdb_"" with 100.0 weight got id 0
       # swift-ring-builder container.builder add \
-        --region 1 --zone 1 --ip 10.0.0.51 --port 6001 --device sdc --weight 100
-      Device d1r1z2-10.0.0.51:6001R10.0.0.51:6001/sdc_"" with 100.0 weight got id 1
+        --region 1 --zone 1 --ip 10.0.0.51 --port 6201 --device sdc --weight 100
+      Device d1r1z2-10.0.0.51:6201R10.0.0.51:6201/sdc_"" with 100.0 weight got id 1
       # swift-ring-builder container.builder add \
-        --region 1 --zone 2 --ip 10.0.0.52 --port 6001 --device sdb --weight 100
-      Device d2r1z3-10.0.0.52:6001R10.0.0.52:6001/sdb_"" with 100.0 weight got id 2
+        --region 1 --zone 2 --ip 10.0.0.52 --port 6201 --device sdb --weight 100
+      Device d2r1z3-10.0.0.52:6201R10.0.0.52:6201/sdb_"" with 100.0 weight got id 2
       # swift-ring-builder container.builder add \
-        --region 1 --zone 2 --ip 10.0.0.52 --port 6001 --device sdc --weight 100
-      Device d3r1z4-10.0.0.52:6001R10.0.0.52:6001/sdc_"" with 100.0 weight got id 3
+        --region 1 --zone 2 --ip 10.0.0.52 --port 6201 --device sdc --weight 100
+      Device d3r1z4-10.0.0.52:6201R10.0.0.52:6201/sdc_"" with 100.0 weight got id 3
 
 #. Verify the ring contents:
 
@@ -155,10 +155,10 @@
       The minimum number of hours before a partition can be reassigned is 1
       The overload factor is 0.00% (0.000000)
       Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
-                   0       1     1       10.0.0.51  6001       10.0.0.51              6001      sdb  100.00          0 -100.00
-                   1       1     1       10.0.0.51  6001       10.0.0.51              6001      sdc  100.00          0 -100.00
-                   2       1     2       10.0.0.52  6001       10.0.0.52              6001      sdb  100.00          0 -100.00
-                   3       1     2       10.0.0.52  6001       10.0.0.52              6001      sdc  100.00          0 -100.00
+                   0       1     1       10.0.0.51  6201       10.0.0.51              6201      sdb  100.00          0 -100.00
+                   1       1     1       10.0.0.51  6201       10.0.0.51              6201      sdc  100.00          0 -100.00
+                   2       1     2       10.0.0.52  6201       10.0.0.52              6201      sdb  100.00          0 -100.00
+                   3       1     2       10.0.0.52  6201       10.0.0.52              6201      sdc  100.00          0 -100.00
 
 #. Rebalance the ring:
 
@@ -190,7 +190,7 @@
    .. code-block:: console
 
       # swift-ring-builder object.builder \
-        add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6000 \
+        add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
         --device DEVICE_NAME --weight DEVICE_WEIGHT
 
    Replace ``STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
@@ -202,7 +202,7 @@
    .. code-block:: console
 
       # swift-ring-builder object.builder add \
-        --region 1 --zone 1 --ip 10.0.0.51 --port 6000 --device sdb --weight 100
+        --region 1 --zone 1 --ip 10.0.0.51 --port 6200 --device sdb --weight 100
 
    Repeat this command for each storage device on each storage node. In the
    example architecture, use the command in four variations:
@@ -210,17 +210,17 @@
    .. code-block:: console
 
       # swift-ring-builder object.builder add \
-        --region 1 --zone 1 --ip 10.0.0.51 --port 6000 --device sdb --weight 100
-      Device d0r1z1-10.0.0.51:6000R10.0.0.51:6000/sdb_"" with 100.0 weight got id 0
+        --region 1 --zone 1 --ip 10.0.0.51 --port 6200 --device sdb --weight 100
+      Device d0r1z1-10.0.0.51:6200R10.0.0.51:6200/sdb_"" with 100.0 weight got id 0
       # swift-ring-builder object.builder add \
-        --region 1 --zone 1 --ip 10.0.0.51 --port 6000 --device sdc --weight 100
-      Device d1r1z2-10.0.0.51:6000R10.0.0.51:6000/sdc_"" with 100.0 weight got id 1
+        --region 1 --zone 1 --ip 10.0.0.51 --port 6200 --device sdc --weight 100
+      Device d1r1z2-10.0.0.51:6200R10.0.0.51:6200/sdc_"" with 100.0 weight got id 1
       # swift-ring-builder object.builder add \
-        --region 1 --zone 2 --ip 10.0.0.52 --port 6000 --device sdb --weight 100
-      Device d2r1z3-10.0.0.52:6000R10.0.0.52:6000/sdb_"" with 100.0 weight got id 2
+        --region 1 --zone 2 --ip 10.0.0.52 --port 6200 --device sdb --weight 100
+      Device d2r1z3-10.0.0.52:6200R10.0.0.52:6200/sdb_"" with 100.0 weight got id 2
       # swift-ring-builder object.builder add \
-        --region 1 --zone 2 --ip 10.0.0.52 --port 6000 --device sdc --weight 100
-      Device d3r1z4-10.0.0.52:6000R10.0.0.52:6000/sdc_"" with 100.0 weight got id 3
+        --region 1 --zone 2 --ip 10.0.0.52 --port 6200 --device sdc --weight 100
+      Device d3r1z4-10.0.0.52:6200R10.0.0.52:6200/sdc_"" with 100.0 weight got id 3
 
 #. Verify the ring contents:
 
@@ -232,10 +232,10 @@
       The minimum number of hours before a partition can be reassigned is 1
       The overload factor is 0.00% (0.000000)
       Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
-                   0       1     1       10.0.0.51  6000       10.0.0.51              6000      sdb  100.00          0 -100.00
-                   1       1     1       10.0.0.51  6000       10.0.0.51              6000      sdc  100.00          0 -100.00
-                   2       1     2       10.0.0.52  6000       10.0.0.52              6000      sdb  100.00          0 -100.00
-                   3       1     2       10.0.0.52  6000       10.0.0.52              6000      sdc  100.00          0 -100.00
+                   0       1     1       10.0.0.51  6200       10.0.0.51              6200      sdb  100.00          0 -100.00
+                   1       1     1       10.0.0.51  6200       10.0.0.51              6200      sdc  100.00          0 -100.00
+                   2       1     2       10.0.0.52  6200       10.0.0.52              6200      sdb  100.00          0 -100.00
+                   3       1     2       10.0.0.52  6200       10.0.0.52              6200      sdc  100.00          0 -100.00
 
 #. Rebalance the ring:
 
diff -Nru swift-2.10.1/install-guide/source/storage-include1.txt swift-2.10.2/install-guide/source/storage-include1.txt
--- swift-2.10.1/install-guide/source/storage-include1.txt	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/install-guide/source/storage-include1.txt	2017-05-26 21:40:54.000000000 +0200
@@ -9,7 +9,7 @@
      [DEFAULT]
      ...
      bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
-     bind_port = 6002
+     bind_port = 6202
      user = swift
      swift_dir = /etc/swift
      devices = /srv/node
diff -Nru swift-2.10.1/install-guide/source/storage-include2.txt swift-2.10.2/install-guide/source/storage-include2.txt
--- swift-2.10.1/install-guide/source/storage-include2.txt	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/install-guide/source/storage-include2.txt	2017-05-26 21:40:54.000000000 +0200
@@ -9,7 +9,7 @@
      [DEFAULT]
      ...
      bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
-     bind_port = 6001
+     bind_port = 6201
      user = swift
      swift_dir = /etc/swift
      devices = /srv/node
diff -Nru swift-2.10.1/install-guide/source/storage-include3.txt swift-2.10.2/install-guide/source/storage-include3.txt
--- swift-2.10.1/install-guide/source/storage-include3.txt	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/install-guide/source/storage-include3.txt	2017-05-26 21:40:54.000000000 +0200
@@ -9,7 +9,7 @@
      [DEFAULT]
      ...
      bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
-     bind_port = 6000
+     bind_port = 6200
      user = swift
      swift_dir = /etc/swift
      devices = /srv/node
diff -Nru swift-2.10.1/releasenotes/notes/2_10_2_release-eb9abaa82fcc8ebc.yaml swift-2.10.2/releasenotes/notes/2_10_2_release-eb9abaa82fcc8ebc.yaml
--- swift-2.10.1/releasenotes/notes/2_10_2_release-eb9abaa82fcc8ebc.yaml	1970-01-01 01:00:00.000000000 +0100
+++ swift-2.10.2/releasenotes/notes/2_10_2_release-eb9abaa82fcc8ebc.yaml	2017-05-26 21:40:54.000000000 +0200
@@ -0,0 +1,61 @@
+---
+fixes:
+  - >
+    Improvements in key parts of the consistency engine
+
+    - Optimized the common case for hashing filesystem trees, thus
+      eliminating a lot of extraneous disk I/O.
+
+    - Updated the `hashes.pkl` file format to include timestamp information
+      for race detection. Also simplified hashing logic to prevent race
+      conditions and optimize for the common case.
+
+    Upgrade Impact: If you upgrade and roll back, you must delete all
+    `hashes.pkl` files.
+
+  - >
+    If using erasure coding with ISA-L in rs_vand mode and 5 or more parity
+    fragments, Swift will emit a warning. This is a configuration that is
+    known to harm data durability. In a future release, this warning will be
+    upgraded to an error unless the policy is marked as deprecated. All data
+    in an erasure code storage policy using isa_l_rs_vand with 5 or more
+    parity should be migrated as soon as possible. Please see
+    https://bugs.launchpad.net/swift/+bug/1639691 for more information.
+
+  - >
+    Fixed a bug where the ring builder would not allow removal of a device
+    when min_part_seconds_left was greater than zero.
+
+  - >
+    PUT subrequests generated from a client-side COPY will now properly log
+    the SSC (server-side copy) Swift source field. See
+    https://docs.openstack.org/developer/swift/logs.html#swift-source for
+    more information.
+
+  - >
+    Rings with min_part_hours set to zero will now only move one partition
+    replica per rebalance, thus matching behavior when min_part_hours is
+    greater than zero.
+
+  - >
+    Correctly send 412 Precondition Failed if a user sends an
+    invalid copy destination. Previously Swift would send a 500
+    Internal Server Error.
+
+  - >
+    Fixed error where a container drive error resulted in double space
+    usage on rest drives. When drive with container or account database
+    is unmounted, the bug would create handoff replicas on all remaining
+    drives, increasing the drive space used and filling the cluster.
+
+  - >
+    Account and container databases will now be quarantined if the
+    database schema has been corrupted.
+
+  - >
+    Ensure update of the container by object-updater, removing a rare
+    possibility that objects would never be added to a container listing.
+
+  - Fixed some minor test compatibility issues.
+
+  - Updated docs to reference appropriate ports.
diff -Nru swift-2.10.1/swift/cli/ringbuilder.py swift-2.10.2/swift/cli/ringbuilder.py
--- swift-2.10.1/swift/cli/ringbuilder.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/swift/cli/ringbuilder.py	2017-05-26 21:40:54.000000000 +0200
@@ -851,15 +851,8 @@
             handler.setFormatter(formatter)
             logger.addHandler(handler)
 
-        if builder.min_part_seconds_left > 0 and not options.force:
-            print('No partitions could be reassigned.')
-            print('The time between rebalances must be at least '
-                  'min_part_hours: %s hours (%s remaining)' % (
-                      builder.min_part_hours,
-                      timedelta(seconds=builder.min_part_seconds_left)))
-            exit(EXIT_WARNING)
-
         devs_changed = builder.devs_changed
+        min_part_seconds_left = builder.min_part_seconds_left
         try:
             last_balance = builder.get_balance()
             parts, balance, removed_devs = builder.rebalance(seed=get_seed(3))
@@ -874,7 +867,13 @@
             exit(EXIT_ERROR)
         if not (parts or options.force or removed_devs):
             print('No partitions could be reassigned.')
-            print('There is no need to do so at this time')
+            if min_part_seconds_left > 0:
+                print('The time between rebalances must be at least '
+                      'min_part_hours: %s hours (%s remaining)' % (
+                          builder.min_part_hours,
+                          timedelta(seconds=builder.min_part_seconds_left)))
+            else:
+                print('There is no need to do so at this time')
             exit(EXIT_WARNING)
         # If we set device's weight to zero, currently balance will be set
         # special value(MAX_BALANCE) until zero weighted device return all
diff -Nru swift-2.10.1/swift/common/db.py swift-2.10.2/swift/common/db.py
--- swift-2.10.1/swift/common/db.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/swift/common/db.py	2017-05-26 21:40:54.000000000 +0200
@@ -331,6 +331,8 @@
         """
         if 'database disk image is malformed' in str(exc_value):
             exc_hint = 'malformed'
+        elif 'malformed database schema' in str(exc_value):
+            exc_hint = 'malformed'
         elif 'file is encrypted or is not a database' in str(exc_value):
             exc_hint = 'corrupted'
         elif 'disk I/O error' in str(exc_value):
diff -Nru swift-2.10.1/swift/common/db_replicator.py swift-2.10.2/swift/common/db_replicator.py
--- swift-2.10.1/swift/common/db_replicator.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/swift/common/db_replicator.py	2017-05-26 21:40:54.000000000 +0200
@@ -533,7 +533,7 @@
         more_nodes = self.ring.get_more_nodes(int(partition))
         if not local_dev:
             # Check further if local device is a handoff node
-            for node in more_nodes:
+            for node in self.ring.get_more_nodes(int(partition)):
                 if node['id'] == node_id:
                     local_dev = node
                     break
@@ -549,7 +549,13 @@
                 success = self._repl_to_node(node, broker, partition, info,
                                              different_region)
             except DriveNotMounted:
-                repl_nodes.append(next(more_nodes))
+                try:
+                    repl_nodes.append(next(more_nodes))
+                except StopIteration:
+                    self.logger.error(
+                        _('ERROR There are not enough handoff nodes to reach '
+                          'replica count for partition %s'),
+                        partition)
                 self.logger.error(_('ERROR Remote drive not mounted %s'), node)
             except (Exception, Timeout):
                 self.logger.exception(_('ERROR syncing %(file)s with node'
diff -Nru swift-2.10.1/swift/common/middleware/copy.py swift-2.10.2/swift/common/middleware/copy.py
--- swift-2.10.1/swift/common/middleware/copy.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/swift/common/middleware/copy.py	2017-05-26 21:40:54.000000000 +0200
@@ -139,7 +139,7 @@
 from swift.common.utils import get_logger, \
     config_true_value, FileLikeIter, read_conf_dir, close_if_possible
 from swift.common.swob import Request, HTTPPreconditionFailed, \
-    HTTPRequestEntityTooLarge, HTTPBadRequest
+    HTTPRequestEntityTooLarge, HTTPBadRequest, HTTPException
 from swift.common.http import HTTP_MULTIPLE_CHOICES, HTTP_CREATED, \
     is_success, HTTP_OK
 from swift.common.constraints import check_account_format, MAX_FILE_SIZE
@@ -318,21 +318,25 @@
         self.container_name = container
         self.object_name = obj
 
-        # Save off original request method (COPY/POST) in case it gets mutated
-        # into PUT during handling. This way logging can display the method
-        # the client actually sent.
-        req.environ['swift.orig_req_method'] = req.method
-
-        if req.method == 'PUT' and req.headers.get('X-Copy-From'):
-            return self.handle_PUT(req, start_response)
-        elif req.method == 'COPY':
-            return self.handle_COPY(req, start_response)
-        elif req.method == 'POST' and self.object_post_as_copy:
-            return self.handle_object_post_as_copy(req, start_response)
-        elif req.method == 'OPTIONS':
-            # Does not interfere with OPTIONS response from (account,container)
-            # servers and /info response.
-            return self.handle_OPTIONS(req, start_response)
+        try:
+            # In some cases, save off original request method since it gets
+            # mutated into PUT during handling. This way logging can display
+            # the method the client actually sent.
+            if req.method == 'PUT' and req.headers.get('X-Copy-From'):
+                return self.handle_PUT(req, start_response)
+            elif req.method == 'COPY':
+                req.environ['swift.orig_req_method'] = req.method
+                return self.handle_COPY(req, start_response)
+            elif req.method == 'POST' and self.object_post_as_copy:
+                req.environ['swift.orig_req_method'] = req.method
+                return self.handle_object_post_as_copy(req, start_response)
+            elif req.method == 'OPTIONS':
+                # Does not interfere with OPTIONS response from
+                # (account,container) servers and /info response.
+                return self.handle_OPTIONS(req, start_response)
+
+        except HTTPException as e:
+            return e(req.environ, start_response)
 
         return self.app(env, start_response)
 
diff -Nru swift-2.10.1/swift/common/ring/builder.py swift-2.10.2/swift/common/ring/builder.py
--- swift-2.10.1/swift/common/ring/builder.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/swift/common/ring/builder.py	2017-05-26 21:40:54.000000000 +0200
@@ -108,6 +108,8 @@
         # a device overrides this behavior as it's assumed that's only
         # done because of device failure.
         self._last_part_moves = None
+        # _part_moved_bitmap record parts have been moved
+        self._part_moved_bitmap = None
         # _last_part_moves_epoch indicates the time the offsets in
         # _last_part_moves is based on.
         self._last_part_moves_epoch = 0
@@ -125,6 +127,19 @@
             # silence "no handler for X" error messages
             self.logger.addHandler(NullHandler())
 
+    def _set_part_moved(self, part):
+        self._last_part_moves[part] = 0
+        byte, bit = divmod(part, 8)
+        self._part_moved_bitmap[byte] |= (128 >> bit)
+
+    def _has_part_moved(self, part):
+        byte, bit = divmod(part, 8)
+        return bool(self._part_moved_bitmap[byte] & (128 >> bit))
+
+    def _can_part_move(self, part):
+        return (self._last_part_moves[part] >= self.min_part_hours and
+                not self._has_part_moved(part))
+
     @contextmanager
     def debug(self):
         """
@@ -437,6 +452,7 @@
         if self._last_part_moves is None:
             self.logger.debug("New builder; performing initial balance")
             self._last_part_moves = array('B', itertools.repeat(0, self.parts))
+        self._part_moved_bitmap = bytearray(max(2 ** (self.part_power - 3), 1))
         self._update_last_part_moves()
 
         replica_plan = self._build_replica_plan()
@@ -876,7 +892,7 @@
                     dev_id = self._replica2part2dev[replica][part]
                     if dev_id in dev_ids:
                         self._replica2part2dev[replica][part] = NONE_DEV
-                        self._last_part_moves[part] = 0
+                        self._set_part_moved(part)
                         assign_parts[part].append(replica)
                         self.logger.debug(
                             "Gathered %d/%d from dev %d [dev removed]",
@@ -964,7 +980,7 @@
         # Now we gather partitions that are "at risk" because they aren't
         # currently sufficient spread out across the cluster.
         for part in range(self.parts):
-            if self._last_part_moves[part] < self.min_part_hours:
+            if (not self._can_part_move(part)):
                 continue
             # First, add up the count of replicas at each tier for each
             # partition.
@@ -996,7 +1012,7 @@
                 # has more than one replica of a part assigned to it - which
                 # would have only been possible on rings built with an older
                 # version of the code
-                if (self._last_part_moves[part] < self.min_part_hours and
+                if (not self._can_part_move(part) and
                         not replicas_at_tier[dev['tiers'][-1]] > 1):
                     continue
                 dev['parts_wanted'] += 1
@@ -1008,7 +1024,7 @@
                 self._replica2part2dev[replica][part] = NONE_DEV
                 for tier in dev['tiers']:
                     replicas_at_tier[tier] -= 1
-                self._last_part_moves[part] = 0
+                self._set_part_moved(part)
 
     def _gather_parts_for_balance_can_disperse(self, assign_parts, start,
                                                replica_plan):
@@ -1025,7 +1041,7 @@
         # they have more partitions than their parts_wanted.
         for offset in range(self.parts):
             part = (start + offset) % self.parts
-            if self._last_part_moves[part] < self.min_part_hours:
+            if (not self._can_part_move(part)):
                 continue
             # For each part we'll look at the devices holding those parts and
             # see if any are overweight, keeping track of replicas_at_tier as
@@ -1048,7 +1064,7 @@
             overweight_dev_replica.sort(
                 key=lambda dr: dr[0]['parts_wanted'])
             for dev, replica in overweight_dev_replica:
-                if self._last_part_moves[part] < self.min_part_hours:
+                if (not self._can_part_move(part)):
                     break
                 if any(replica_plan[tier]['min'] <=
                        replicas_at_tier[tier] <
@@ -1067,7 +1083,7 @@
                 self._replica2part2dev[replica][part] = NONE_DEV
                 for tier in dev['tiers']:
                     replicas_at_tier[tier] -= 1
-                self._last_part_moves[part] = 0
+                self._set_part_moved(part)
 
     def _gather_parts_for_balance(self, assign_parts, replica_plan):
         """
@@ -1107,7 +1123,7 @@
         """
         for offset in range(self.parts):
             part = (start + offset) % self.parts
-            if self._last_part_moves[part] < self.min_part_hours:
+            if (not self._can_part_move(part)):
                 continue
             overweight_dev_replica = []
             for replica in self._replicas_for_part(part):
@@ -1124,7 +1140,7 @@
             overweight_dev_replica.sort(
                 key=lambda dr: dr[0]['parts_wanted'])
             for dev, replica in overweight_dev_replica:
-                if self._last_part_moves[part] < self.min_part_hours:
+                if (not self._can_part_move(part)):
                     break
                 # this is the most overweight_device holding a replica of this
                 # part we don't know where it's going to end up - but we'll
@@ -1136,7 +1152,7 @@
                     "Gathered %d/%d from dev %d [weight forced]",
                     part, replica, dev['id'])
                 self._replica2part2dev[replica][part] = NONE_DEV
-                self._last_part_moves[part] = 0
+                self._set_part_moved(part)
 
     def _reassign_parts(self, reassign_parts, replica_plan):
         """
diff -Nru swift-2.10.1/swift/common/storage_policy.py swift-2.10.2/swift/common/storage_policy.py
--- swift-2.10.1/swift/common/storage_policy.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/swift/common/storage_policy.py	2017-05-26 21:40:54.000000000 +0200
@@ -12,8 +12,10 @@
 # limitations under the License.
 
 
+import logging
 import os
 import string
+import sys
 import textwrap
 import six
 from six.moves.configparser import ConfigParser
@@ -453,6 +455,26 @@
             raise PolicyError('Invalid ec_object_segment_size %r' %
                               ec_segment_size, index=self.idx)
 
+        if self._ec_type == 'isa_l_rs_vand' and self._ec_nparity >= 5:
+            logger = logging.getLogger("swift.common.storage_policy")
+            if not logger.handlers:
+                # If nothing else, log to stderr
+                logger.addHandler(logging.StreamHandler(sys.__stderr__))
+            logger.warning(
+                'Storage policy %s uses an EC configuration known to harm '
+                'data durability. Any data in this policy should be migrated. '
+                'See https://bugs.launchpad.net/swift/+bug/1639691 for '
+                'more information.' % self.name)
+            if not is_deprecated:
+                # TODO: To fully close bug 1639691, uncomment the raise and
+                # removing the warning below. This will be in the Pike release
+                # at the earliest.
+                logger.warning(
+                    'In a future release, this will prevent services from '
+                    'starting unless the policy is marked as deprecated.')
+                # raise PolicyError('Storage policy %s MUST be deprecated' %
+                #                   self.name)
+
         # Initialize PyECLib EC backend
         try:
             self.pyeclib_driver = \
diff -Nru swift-2.10.1/swift/common/wsgi.py swift-2.10.2/swift/common/wsgi.py
--- swift-2.10.1/swift/common/wsgi.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/swift/common/wsgi.py	2017-05-26 21:40:54.000000000 +0200
@@ -1115,8 +1115,7 @@
                  'SERVER_PROTOCOL', 'swift.cache', 'swift.source',
                  'swift.trans_id', 'swift.authorize_override',
                  'swift.authorize', 'HTTP_X_USER_ID', 'HTTP_X_PROJECT_ID',
-                 'HTTP_REFERER', 'swift.orig_req_method', 'swift.log_info',
-                 'swift.infocache'):
+                 'HTTP_REFERER', 'swift.infocache'):
         if name in env:
             newenv[name] = env[name]
     if method:
diff -Nru swift-2.10.1/swift/obj/diskfile.py swift-2.10.2/swift/obj/diskfile.py
--- swift-2.10.1/swift/obj/diskfile.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/swift/obj/diskfile.py	2017-05-26 21:40:54.000000000 +0200
@@ -31,6 +31,7 @@
 """
 
 import six.moves.cPickle as pickle
+import copy
 import errno
 import fcntl
 import json
@@ -41,7 +42,7 @@
 import logging
 import traceback
 import xattr
-from os.path import basename, dirname, exists, getmtime, join, splitext
+from os.path import basename, dirname, exists, join, splitext
 from random import shuffle
 from tempfile import mkstemp
 from contextlib import contextmanager
@@ -228,6 +229,48 @@
     return to_dir
 
 
+def read_hashes(partition_dir):
+    """
+    Read the existing hashes.pkl
+
+    :returns: a dict of suffix hashes (if any); the key 'valid' will be False
+              if hashes.pkl is corrupt, cannot be read or does not exist
+    """
+    hashes_file = join(partition_dir, HASH_FILE)
+    hashes = {'valid': False}
+    try:
+        with open(hashes_file, 'rb') as hashes_fp:
+            pickled_hashes = hashes_fp.read()
+    except (IOError, OSError):
+        pass
+    else:
+        try:
+            hashes = pickle.loads(pickled_hashes)
+        except Exception:
+            # pickle.loads() can raise a wide variety of exceptions when
+            # given invalid input depending on the way in which the
+            # input is invalid.
+            pass
+    # hashes.pkl w/o valid updated key is "valid" but "forever old"
+    hashes.setdefault('valid', True)
+    hashes.setdefault('updated', -1)
+    return hashes
+
+
+def write_hashes(partition_dir, hashes):
+    """
+    Write hashes to hashes.pkl
+
+    The updated key is added to hashes before it is written.
+    """
+    hashes_file = join(partition_dir, HASH_FILE)
+    # the 'valid' key should always be set by the caller; if a caller ever
+    # fails to set it, defaulting to invalid is the safe choice
+    hashes.setdefault('valid', False)
+    hashes['updated'] = time.time()
+    write_pickle(hashes, hashes_file, partition_dir, PICKLE_PROTOCOL)
+
+
 def consolidate_hashes(partition_dir):
     """
     Take what's in hashes.pkl and hashes.invalid, combine them, write the
@@ -236,62 +279,31 @@
     :param suffix_dir: absolute path to partition dir containing hashes.pkl
                        and hashes.invalid
 
-    :returns: the hashes, or None if there's no hashes.pkl.
+    :returns: a dict of suffix hashes (if any); the key 'valid' will be False
+              if hashes.pkl is corrupt, cannot be read or does not exist
     """
-    hashes_file = join(partition_dir, HASH_FILE)
     invalidations_file = join(partition_dir, HASH_INVALIDATIONS_FILE)
 
-    if not os.path.exists(hashes_file):
-        if os.path.exists(invalidations_file):
-            # no hashes at all -> everything's invalid, so empty the file with
-            # the invalid suffixes in it, if it exists
-            try:
-                with open(invalidations_file, 'wb'):
-                    pass
-            except OSError as e:
-                if e.errno != errno.ENOENT:
-                    raise
-        return None
-
     with lock_path(partition_dir):
-        try:
-            with open(hashes_file, 'rb') as hashes_fp:
-                pickled_hashes = hashes_fp.read()
-        except (IOError, OSError):
-            hashes = {}
-        else:
-            try:
-                hashes = pickle.loads(pickled_hashes)
-            except Exception:
-                # pickle.loads() can raise a wide variety of exceptions when
-                # given invalid input depending on the way in which the
-                # input is invalid.
-                hashes = None
+        hashes = read_hashes(partition_dir)
 
-        modified = False
+        found_invalidation_entry = False
         try:
             with open(invalidations_file, 'rb') as inv_fh:
                 for line in inv_fh:
+                    found_invalidation_entry = True
                     suffix = line.strip()
-                    if hashes is not None and \
-                            hashes.get(suffix, '') is not None:
-                        hashes[suffix] = None
-                        modified = True
+                    hashes[suffix] = None
         except (IOError, OSError) as e:
             if e.errno != errno.ENOENT:
                 raise
 
-        if modified:
-            write_pickle(hashes, hashes_file, partition_dir, PICKLE_PROTOCOL)
-
-        # Now that all the invalidations are reflected in hashes.pkl, it's
-        # safe to clear out the invalidations file.
-        try:
+        if found_invalidation_entry:
+            write_hashes(partition_dir, hashes)
+            # Now that all the invalidations are reflected in hashes.pkl, it's
+            # safe to clear out the invalidations file.
             with open(invalidations_file, 'wb') as inv_fh:
                 pass
-        except OSError as e:
-            if e.errno != errno.ENOENT:
-                raise
 
         return hashes
 
@@ -306,10 +318,6 @@
 
     suffix = basename(suffix_dir)
     partition_dir = dirname(suffix_dir)
-    hashes_file = join(partition_dir, HASH_FILE)
-    if not os.path.exists(hashes_file):
-        return
-
     invalidations_file = join(partition_dir, HASH_INVALIDATIONS_FILE)
     with lock_path(partition_dir):
         with open(invalidations_file, 'ab') as inv_fh:
@@ -1002,8 +1010,14 @@
         """
         raise NotImplementedError
 
-    def _get_hashes(self, partition_path, recalculate=None, do_listdir=False,
-                    reclaim_age=None):
+    def _get_hashes(self, *args, **kwargs):
+        hashed, hashes = self.__get_hashes(*args, **kwargs)
+        hashes.pop('updated', None)
+        hashes.pop('valid', None)
+        return hashed, hashes
+
+    def __get_hashes(self, partition_path, recalculate=None, do_listdir=False,
+                     reclaim_age=None):
         """
         Get a list of hashes for the suffix dir.  do_listdir causes it to
         mistrust the hash cache for suffix existence at the (unexpectedly high)
@@ -1022,29 +1036,36 @@
         hashed = 0
         hashes_file = join(partition_path, HASH_FILE)
         modified = False
-        force_rewrite = False
-        hashes = {}
-        mtime = -1
+        orig_hashes = {'valid': False}
 
         if recalculate is None:
             recalculate = []
 
         try:
-            mtime = getmtime(hashes_file)
-        except OSError as e:
-            if e.errno != errno.ENOENT:
-                raise
-
-        try:
-            hashes = self.consolidate_hashes(partition_path)
+            orig_hashes = self.consolidate_hashes(partition_path)
         except Exception:
+            self.logger.warning('Unable to read %r', hashes_file,
+                                exc_info=True)
+
+        if not orig_hashes['valid']:
+            # This is the only path from an invalid read (e.g. the file does
+            # not exist, is corrupt, etc.) to valid hashes.  Moreover, in order
+            # to write these valid hashes we must read *the exact same* invalid
+            # state or we'll trigger race detection.
             do_listdir = True
-            force_rewrite = True
+            hashes = {'valid': True}
+            # If the exception handling around consolidate_hashes fired we're
+            # going to do a full rehash regardless; but we need to avoid
+            # needless recursion if the on-disk hashes.pkl is actually readable
+            # (worst case is consolidate_hashes keeps raising exceptions and we
+            # eventually run out of stack).
+            # N.B. orig_hashes invalid only affects new parts and error/edge
+            # conditions - so try not to get overly caught up trying to
+            # optimize it out unless you manage to convince yourself there's a
+            # bad behavior.
+            orig_hashes = read_hashes(partition_path)
         else:
-            if hashes is None:  # no hashes.pkl file; let's build it
-                do_listdir = True
-                force_rewrite = True
-                hashes = {}
+            hashes = copy.deepcopy(orig_hashes)
 
         if do_listdir:
             for suff in os.listdir(partition_path):
@@ -1066,13 +1087,11 @@
                 modified = True
         if modified:
             with lock_path(partition_path):
-                if force_rewrite or not exists(hashes_file) or \
-                        getmtime(hashes_file) == mtime:
-                    write_pickle(
-                        hashes, hashes_file, partition_path, PICKLE_PROTOCOL)
+                if read_hashes(partition_path) == orig_hashes:
+                    write_hashes(partition_path, hashes)
                     return hashed, hashes
-            return self._get_hashes(partition_path, recalculate, do_listdir,
-                                    reclaim_age)
+            return self.__get_hashes(partition_path, recalculate, do_listdir,
+                                     reclaim_age)
         else:
             return hashed, hashes
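[Note for the release team, not part of the debdiff] To make the hashes.pkl
change easier to review: after this patch the pickled dict carries the 'valid'
and 'updated' bookkeeping keys next to the per-suffix hashes, and
__get_hashes() only writes its recalculation back if nothing else rewrote the
file in the meantime. A minimal sketch (the example suffixes and values are
made up; the keys and the read-compare-write pattern come from the hunks
above):

    # Shape of the dict handled by read_hashes()/write_hashes():
    hashes = {
        'valid': True,             # False when hashes.pkl is missing/corrupt
        'updated': 1495827654.2,   # time.time() of the last write
        'a83': 'd41d8cd98f00b204e9800998ecf8427e',  # suffix -> hash
        'f12': None,               # invalidated suffix, needs rehashing
    }

    # Race detection before persisting a recalculation:
    #   if read_hashes(partition_path) == orig_hashes:
    #       write_hashes(partition_path, hashes)
    #   else: recurse via __get_hashes() and fold in the newer on-disk state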
 
diff -Nru swift-2.10.1/swift/obj/updater.py swift-2.10.2/swift/obj/updater.py
--- swift-2.10.1/swift/obj/updater.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/swift/obj/updater.py	2017-05-26 21:40:54.000000000 +0200
@@ -31,8 +31,7 @@
 from swift.common.daemon import Daemon
 from swift.common.storage_policy import split_policy_string, PolicyError
 from swift.obj.diskfile import get_tmp_dir, ASYNCDIR_BASE
-from swift.common.http import is_success, HTTP_NOT_FOUND, \
-    HTTP_INTERNAL_SERVER_ERROR
+from swift.common.http import is_success, HTTP_INTERNAL_SERVER_ERROR
 
 
 class ObjectUpdater(Daemon):
@@ -269,8 +268,13 @@
             with Timeout(self.node_timeout):
                 resp = conn.getresponse()
                 resp.read()
-                success = (is_success(resp.status) or
-                           resp.status == HTTP_NOT_FOUND)
+                success = is_success(resp.status)
+                if not success:
+                    self.logger.error(
+                        _('Error code %(status)d is returned from remote '
+                          'server %(ip)s: %(port)s / %(device)s'),
+                        {'status': resp.status, 'ip': node['ip'],
+                         'port': node['port'], 'device': node['device']})
                 return (success, node['id'])
         except (Exception, Timeout):
             self.logger.exception(_('ERROR with remote server '
diff -Nru swift-2.10.1/test/functional/__init__.py swift-2.10.2/test/functional/__init__.py
--- swift-2.10.1/test/functional/__init__.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/functional/__init__.py	2017-05-26 21:40:54.000000000 +0200
@@ -40,7 +40,7 @@
 from swift.common.middleware.memcache import MemcacheMiddleware
 from swift.common.storage_policy import parse_storage_policies, PolicyError
 
-from test import get_config
+from test import get_config, listen_zero
 from test.functional.swift_test_client import Account, Connection, Container, \
     ResponseError
 # This has the side effect of mocking out the xattr module so that unit tests
@@ -259,7 +259,7 @@
             device = 'sd%c1' % chr(len(obj_sockets) + ord('a'))
             utils.mkdirs(os.path.join(_testdir, 'sda1'))
             utils.mkdirs(os.path.join(_testdir, 'sda1', 'tmp'))
-            obj_socket = eventlet.listen(('localhost', 0))
+            obj_socket = listen_zero()
             obj_sockets.append(obj_socket)
             dev['port'] = obj_socket.getsockname()[1]
             dev['ip'] = '127.0.0.1'
@@ -270,7 +270,7 @@
     else:
         # make default test ring, 2 replicas, 4 partitions, 2 devices
         _info('No source object ring file, creating 2rep/4part/2dev ring')
-        obj_sockets = [eventlet.listen(('localhost', 0)) for _ in (0, 1)]
+        obj_sockets = [listen_zero() for _ in (0, 1)]
         ring_data = ring.RingData(
             [[0, 1, 0, 1], [1, 0, 1, 0]],
             [{'id': 0, 'zone': 0, 'device': 'sda1', 'ip': '127.0.0.1',
@@ -416,7 +416,7 @@
     # We create the proxy server listening socket to get its port number so
     # that we can add it as the "auth_port" value for the functional test
     # clients.
-    prolis = eventlet.listen(('localhost', 0))
+    prolis = listen_zero()
     _test_socks.append(prolis)
 
     # The following set of configuration values is used both for the
@@ -472,10 +472,10 @@
         config['object_post_as_copy'] = str(object_post_as_copy)
         _debug('Setting object_post_as_copy to %r' % object_post_as_copy)
 
-    acc1lis = eventlet.listen(('localhost', 0))
-    acc2lis = eventlet.listen(('localhost', 0))
-    con1lis = eventlet.listen(('localhost', 0))
-    con2lis = eventlet.listen(('localhost', 0))
+    acc1lis = listen_zero()
+    acc2lis = listen_zero()
+    con1lis = listen_zero()
+    con2lis = listen_zero()
     _test_socks += [acc1lis, acc2lis, con1lis, con2lis] + obj_sockets
 
     account_ring_path = os.path.join(_testdir, 'account.ring.gz')
diff -Nru swift-2.10.1/test/functional/tests.py swift-2.10.2/test/functional/tests.py
--- swift-2.10.1/test/functional/tests.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/functional/tests.py	2017-05-26 21:40:54.000000000 +0200
@@ -1600,6 +1600,12 @@
                          cfg={'destination': Utils.create_name()}))
         self.assert_status(412)
 
+        # too many slashes
+        self.assertFalse(file_item.copy(Utils.create_name(),
+                         Utils.create_name(),
+                         cfg={'destination': '//%s' % Utils.create_name()}))
+        self.assert_status(412)
+
     def testCopyFromHeader(self):
         source_filename = Utils.create_name()
         file_item = self.env.container.file(source_filename)
diff -Nru swift-2.10.1/test/__init__.py swift-2.10.2/test/__init__.py
--- swift-2.10.1/test/__init__.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/__init__.py	2017-05-26 21:40:54.000000000 +0200
@@ -33,6 +33,8 @@
             return result
         return result[:_MAX_LENGTH] + ' [truncated]...'
 
+from eventlet.green import socket
+
 # make unittests pass on all locale
 import swift
 setattr(swift, 'gettext_', lambda x: x)
@@ -72,3 +74,16 @@
             print('Unable to read test config %s - section %s not found'
                   % (config_file, section_name), file=sys.stderr)
     return config
+
+
+def listen_zero():
+    """
+    eventlet.listen() always sets SO_REUSEPORT, so when called with
+    ("localhost", 0) it can return the same port twice instead of unique
+    ports. That causes our tests to fail, so open-code it here without
+    SO_REUSEPORT.
+    """
+    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+    sock.bind(("127.0.0.1", 0))
+    sock.listen(50)
+    return sock
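[Note for the release team, not part of the debdiff] listen_zero() is simply
used in place of eventlet.listen(('localhost', 0)) throughout the tests; a
minimal usage sketch, matching how the functional tests pick their ports:

    sock = listen_zero()
    port = sock.getsockname()[1]  # ephemeral port, never shared via SO_REUSEPORT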
diff -Nru swift-2.10.1/test/probe/test_object_async_update.py swift-2.10.2/test/probe/test_object_async_update.py
--- swift-2.10.1/test/probe/test_object_async_update.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/probe/test_object_async_update.py	2017-05-26 21:40:54.000000000 +0200
@@ -18,7 +18,10 @@
 from unittest import main
 from uuid import uuid4
 
+from nose import SkipTest
+
 from swiftclient import client
+from swiftclient.exceptions import ClientException
 
 from swift.common import direct_client
 from swift.common.manager import Manager
@@ -58,6 +61,64 @@
             cnode, cpart, self.account, container)[1]]
         self.assertTrue(obj in objs)
 
+    def test_missing_container(self):
+        # In this test, we need to put the container on handoff devices, so
+        # we need more container devices than the replica count
+        if len(self.container_ring.devs) <= self.container_ring.replica_count:
+            raise SkipTest('Need more devices than replica count')
+
+        container = 'container-%s' % uuid4()
+        cpart, cnodes = self.container_ring.get_nodes(self.account, container)
+
+        # Kill all primary container servers
+        for cnode in cnodes:
+            kill_server((cnode['ip'], cnode['port']), self.ipport2server)
+
+        # Create container, and all of its replicas are placed at handoff
+        # device
+        try:
+            client.put_container(self.url, self.token, container)
+        except ClientException as err:
+            # if the cluster doesn't have enough devices, swift may return an
+            # error (e.g. when we only have 4 devices in a 3-replica cluster).
+            self.assertEqual(err.http_status, 503)
+
+        # Assert handoff device has a container replica
+        another_cnode = self.container_ring.get_more_nodes(cpart).next()
+        direct_client.direct_get_container(
+            another_cnode, cpart, self.account, container)
+
+        # Restart all primary container servers
+        for cnode in cnodes:
+            start_server((cnode['ip'], cnode['port']), self.ipport2server)
+
+        # Create container/obj
+        obj = 'object-%s' % uuid4()
+        client.put_object(self.url, self.token, container, obj, '')
+
+        # Run the object-updater
+        Manager(['object-updater']).once()
+
+        # Run the container-replicator, and now, container replicas
+        # at handoff device get moved to primary servers
+        Manager(['container-replicator']).once()
+
+        # Assert that the container replicas on the primary servers, just
+        # moved there by the replicator, don't know about the object
+        for cnode in cnodes:
+            self.assertFalse(direct_client.direct_get_container(
+                cnode, cpart, self.account, container)[1])
+
+        # Re-run the object-updaters and now container replicas in primary
+        # container servers should get updated
+        Manager(['object-updater']).once()
+
+        # Assert all primary container servers know about container/obj
+        for cnode in cnodes:
+            objs = [o['name'] for o in direct_client.direct_get_container(
+                    cnode, cpart, self.account, container)[1]]
+            self.assertIn(obj, objs)
+
 
 class TestUpdateOverrides(ReplProbeTest):
     """
diff -Nru swift-2.10.1/test/unit/cli/test_ringbuilder.py swift-2.10.2/test/unit/cli/test_ringbuilder.py
--- swift-2.10.1/test/unit/cli/test_ringbuilder.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/cli/test_ringbuilder.py	2017-05-26 21:40:54.000000000 +0200
@@ -25,6 +25,7 @@
 import uuid
 import shlex
 import shutil
+import time
 
 from swift.cli import ringbuilder
 from swift.cli.ringbuilder import EXIT_SUCCESS, EXIT_WARNING, EXIT_ERROR
@@ -1885,6 +1886,27 @@
             ring = RingBuilder.load(self.tmpfile)
             self.assertEqual(ring.min_part_seconds_left, 3600)
 
+    def test_time_remaining(self):
+        self.create_sample_ring()
+        now = time.time()
+        with mock.patch('swift.common.ring.builder.time', return_value=now):
+            self.run_srb('rebalance')
+            out, err = self.run_srb('rebalance')
+        self.assertIn('No partitions could be reassigned', out)
+        self.assertIn('must be at least min_part_hours', out)
+        self.assertIn('1 hours (1:00:00 remaining)', out)
+        the_future = now + 3600
+        with mock.patch('swift.common.ring.builder.time',
+                        return_value=the_future):
+            out, err = self.run_srb('rebalance')
+        self.assertIn('No partitions could be reassigned', out)
+        self.assertIn('There is no need to do so at this time', out)
+        # or you can pretend_min_part_hours_passed
+        self.run_srb('pretend_min_part_hours_passed')
+        out, err = self.run_srb('rebalance')
+        self.assertIn('No partitions could be reassigned', out)
+        self.assertIn('There is no need to do so at this time', out)
+
     def test_rebalance_failure_does_not_reset_last_moves_epoch(self):
         ring = RingBuilder(8, 3, 1)
         ring.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 1,
@@ -1921,6 +1943,40 @@
         argv = ["", self.tmpfile, "rebalance", "--seed", "2"]
         self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv)
 
+    def test_rebalance_removed_devices(self):
+        self.create_sample_ring()
+        argvs = [
+            ["", self.tmpfile, "rebalance", "3"],
+            ["", self.tmpfile, "remove", "d0"],
+            ["", self.tmpfile, "rebalance", "3"]]
+        for argv in argvs:
+            self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv)
+
+    def test_rebalance_min_part_hours_not_passed(self):
+        self.create_sample_ring()
+        argvs = [
+            ["", self.tmpfile, "rebalance", "3"],
+            ["", self.tmpfile, "set_weight", "d0", "1000"]]
+        for argv in argvs:
+            self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv)
+
+        ring = RingBuilder.load(self.tmpfile)
+        last_replica2part2dev = ring._replica2part2dev
+
+        mock_stdout = six.StringIO()
+        argv = ["", self.tmpfile, "rebalance", "3"]
+        with mock.patch("sys.stdout", mock_stdout):
+            self.assertSystemExit(EXIT_WARNING, ringbuilder.main, argv)
+        expected = "No partitions could be reassigned.\n" + \
+                   "The time between rebalances must be " + \
+                   "at least min_part_hours: 1 hours"
+        self.assertTrue(expected in mock_stdout.getvalue())
+
+        # Messages can be faked, so let's assert that the partition assignment
+        # did not change at all, despite the warning
+        ring = RingBuilder.load(self.tmpfile)
+        self.assertEqual(last_replica2part2dev, ring._replica2part2dev)
+
     def test_write_ring(self):
         self.create_sample_ring()
         argv = ["", self.tmpfile, "rebalance"]
Binary files /tmp/NEdZvanO5v/swift-2.10.1/test/unit/common/malformed_schema_example.db and /tmp/uAGiLww1Vq/swift-2.10.2/test/unit/common/malformed_schema_example.db differ
diff -Nru swift-2.10.1/test/unit/common/middleware/test_account_quotas.py swift-2.10.2/test/unit/common/middleware/test_account_quotas.py
--- swift-2.10.1/test/unit/common/middleware/test_account_quotas.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/common/middleware/test_account_quotas.py	2017-05-26 21:40:54.000000000 +0200
@@ -13,8 +13,7 @@
 
 import unittest
 
-from swift.common.swob import Request, wsgify, HTTPForbidden, \
-    HTTPException
+from swift.common.swob import Request, wsgify, HTTPForbidden
 
 from swift.common.middleware import account_quotas, copy
 
@@ -491,9 +490,8 @@
                             environ={'REQUEST_METHOD': 'PUT',
                                      'swift.cache': cache},
                             headers={'x-copy-from': 'bad_path'})
-        with self.assertRaises(HTTPException) as catcher:
-            req.get_response(self.copy_filter)
-        self.assertEqual(412, catcher.exception.status_int)
+        res = req.get_response(self.copy_filter)
+        self.assertEqual(res.status_int, 412)
 
 if __name__ == '__main__':
     unittest.main()
diff -Nru swift-2.10.1/test/unit/common/middleware/test_copy.py swift-2.10.2/test/unit/common/middleware/test_copy.py
--- swift-2.10.1/test/unit/common/middleware/test_copy.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/common/middleware/test_copy.py	2017-05-26 21:40:54.000000000 +0200
@@ -150,13 +150,15 @@
         self.assertEqual(len(self.authorized), 1)
         self.assertRequestEqual(req, self.authorized[0])
 
-    def test_object_delete_pass_through(self):
-        self.app.register('DELETE', '/v1/a/c/o', swob.HTTPOk, {})
-        req = Request.blank('/v1/a/c/o', method='DELETE')
-        status, headers, body = self.call_ssc(req)
-        self.assertEqual(status, '200 OK')
-        self.assertEqual(len(self.authorized), 1)
-        self.assertRequestEqual(req, self.authorized[0])
+    def test_object_pass_through_methods(self):
+        for method in ['DELETE', 'GET', 'HEAD', 'REPLICATE']:
+            self.app.register(method, '/v1/a/c/o', swob.HTTPOk, {})
+            req = Request.blank('/v1/a/c/o', method=method)
+            status, headers, body = self.call_ssc(req)
+            self.assertEqual(status, '200 OK')
+            self.assertEqual(len(self.authorized), 1)
+            self.assertRequestEqual(req, self.authorized[0])
+            self.assertNotIn('swift.orig_req_method', req.environ)
 
     def test_POST_as_COPY_simple(self):
         self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
@@ -166,6 +168,8 @@
         self.assertEqual(status, '202 Accepted')
         self.assertEqual(len(self.authorized), 1)
         self.assertRequestEqual(req, self.authorized[0])
+        # For basic test cases, assert orig_req_method behavior
+        self.assertEqual(req.environ['swift.orig_req_method'], 'POST')
 
     def test_POST_as_COPY_201_return_202(self):
         self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
@@ -256,6 +260,8 @@
         self.assertEqual('/v1/a/c/o', self.authorized[0].path)
         self.assertEqual('PUT', self.authorized[1].method)
         self.assertEqual('/v1/a/c/o2', self.authorized[1].path)
+        # For basic test cases, assert orig_req_method behavior
+        self.assertNotIn('swift.orig_req_method', req.environ)
 
     def test_static_large_object_manifest(self):
         self.app.register('GET', '/v1/a/c/o', swob.HTTPOk,
@@ -521,24 +527,16 @@
         req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'},
                             headers={'Content-Length': '0',
                                      'X-Copy-From': '/c'})
-        try:
-            status, headers, body = self.call_ssc(req)
-        except HTTPException as resp:
-            self.assertEqual("412 Precondition Failed", str(resp))
-        else:
-            self.fail("Expecting HTTPException.")
+        status, headers, body = self.call_ssc(req)
+        self.assertEqual(status, '412 Precondition Failed')
 
     def test_copy_with_no_object_in_x_copy_from_and_account(self):
         req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'},
                             headers={'Content-Length': '0',
                                      'X-Copy-From': '/c',
                                      'X-Copy-From-Account': 'a'})
-        try:
-            status, headers, body = self.call_ssc(req)
-        except HTTPException as resp:
-            self.assertEqual("412 Precondition Failed", str(resp))
-        else:
-            self.fail("Expecting HTTPException.")
+        status, headers, body = self.call_ssc(req)
+        self.assertEqual(status, '412 Precondition Failed')
 
     def test_copy_with_bad_x_copy_from_account(self):
         req = Request.blank('/v1/a/c/o',
@@ -546,12 +544,8 @@
                             headers={'Content-Length': '0',
                                      'X-Copy-From': '/c/o',
                                      'X-Copy-From-Account': '/i/am/bad'})
-        try:
-            status, headers, body = self.call_ssc(req)
-        except HTTPException as resp:
-            self.assertEqual("412 Precondition Failed", str(resp))
-        else:
-            self.fail("Expecting HTTPException.")
+        status, headers, body = self.call_ssc(req)
+        self.assertEqual(status, '412 Precondition Failed')
 
     def test_copy_server_error_reading_source(self):
         self.app.register('GET', '/v1/a/c/o', swob.HTTPServiceUnavailable, {})
@@ -673,6 +667,8 @@
             ('PUT', '/v1/a/c/o-copy')])
         self.assertIn('etag', self.app.headers[1])
         self.assertEqual(self.app.headers[1]['etag'], 'is sent')
+        # For basic test cases, assert orig_req_method behavior
+        self.assertEqual(req.environ['swift.orig_req_method'], 'COPY')
 
     def test_basic_DLO(self):
         self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {
@@ -992,36 +988,27 @@
         req = Request.blank('/v1/a/c/o',
                             environ={'REQUEST_METHOD': 'COPY'},
                             headers={'Destination': 'c_o'})
-        try:
-            status, headers, body = self.call_ssc(req)
-        except HTTPException as resp:
-            self.assertEqual("412 Precondition Failed", str(resp))
-        else:
-            self.fail("Expecting HTTPException.")
+        status, headers, body = self.call_ssc(req)
+
+        self.assertEqual(status, '412 Precondition Failed')
 
     def test_COPY_account_no_object_in_destination(self):
         req = Request.blank('/v1/a/c/o',
                             environ={'REQUEST_METHOD': 'COPY'},
                             headers={'Destination': 'c_o',
                                      'Destination-Account': 'a1'})
-        try:
-            status, headers, body = self.call_ssc(req)
-        except HTTPException as resp:
-            self.assertEqual("412 Precondition Failed", str(resp))
-        else:
-            self.fail("Expecting HTTPException.")
+        status, headers, body = self.call_ssc(req)
+
+        self.assertEqual(status, '412 Precondition Failed')
 
     def test_COPY_account_bad_destination_account(self):
         req = Request.blank('/v1/a/c/o',
                             environ={'REQUEST_METHOD': 'COPY'},
                             headers={'Destination': '/c/o',
                                      'Destination-Account': '/i/am/bad'})
-        try:
-            status, headers, body = self.call_ssc(req)
-        except HTTPException as resp:
-            self.assertEqual("412 Precondition Failed", str(resp))
-        else:
-            self.fail("Expecting HTTPException.")
+        status, headers, body = self.call_ssc(req)
+
+        self.assertEqual(status, '412 Precondition Failed')
 
     def test_COPY_server_error_reading_source(self):
         self.app.register('GET', '/v1/a/c/o', swob.HTTPServiceUnavailable, {})
@@ -1210,6 +1197,8 @@
         self.assertEqual(len(self.authorized), 1)
         self.assertEqual('OPTIONS', self.authorized[0].method)
         self.assertEqual('/v1/a/c/o', self.authorized[0].path)
+        # For basic test cases, assert orig_req_method behavior
+        self.assertNotIn('swift.orig_req_method', req.environ)
 
     def test_COPY_in_OPTIONS_response_CORS(self):
         self.app.register('OPTIONS', '/v1/a/c/o', swob.HTTPOk,
diff -Nru swift-2.10.1/test/unit/common/middleware/test_quotas.py swift-2.10.2/test/unit/common/middleware/test_quotas.py
--- swift-2.10.1/test/unit/common/middleware/test_quotas.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/common/middleware/test_quotas.py	2017-05-26 21:40:54.000000000 +0200
@@ -15,7 +15,7 @@
 
 import unittest
 
-from swift.common.swob import Request, HTTPUnauthorized, HTTPOk, HTTPException
+from swift.common.swob import Request, HTTPUnauthorized, HTTPOk
 from swift.common.middleware import container_quotas, copy
 from test.unit.common.middleware.helpers import FakeSwift
 
@@ -315,9 +315,8 @@
                             environ={'REQUEST_METHOD': 'PUT',
                                      'swift.cache': cache},
                             headers={'x-copy-from': 'bad_path'})
-        with self.assertRaises(HTTPException) as catcher:
-            req.get_response(self.copy_filter)
-        self.assertEqual(412, catcher.exception.status_int)
+        res = req.get_response(self.copy_filter)
+        self.assertEqual(res.status_int, 412)
 
     def test_exceed_counts_quota_copy_from(self):
         self.app.register('GET', '/v1/a/c2/o2', HTTPOk,
diff -Nru swift-2.10.1/test/unit/common/middleware/test_subrequest_logging.py swift-2.10.2/test/unit/common/middleware/test_subrequest_logging.py
--- swift-2.10.1/test/unit/common/middleware/test_subrequest_logging.py	1970-01-01 01:00:00.000000000 +0100
+++ swift-2.10.2/test/unit/common/middleware/test_subrequest_logging.py	2017-05-26 21:40:54.000000000 +0200
@@ -0,0 +1,178 @@
+# Copyright (c) 2016-2017 OpenStack Foundation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import unittest
+
+from swift.common.middleware import copy, proxy_logging
+from swift.common.swob import Request, HTTPOk
+from swift.common.utils import close_if_possible
+from swift.common.wsgi import make_subrequest
+from test.unit import FakeLogger
+from test.unit.common.middleware.helpers import FakeSwift
+
+
+SUB_GET_PATH = '/v1/a/c/sub_get'
+SUB_PUT_POST_PATH = '/v1/a/c/sub_put'
+
+
+class FakeFilter(object):
+    def __init__(self, app, conf, register):
+        self.body = ['FAKE MIDDLEWARE']
+        self.conf = conf
+        self.app = app
+        self.register = register
+        self.logger = None
+
+    def __call__(self, env, start_response):
+        path = SUB_PUT_POST_PATH
+        if env['REQUEST_METHOD'] == 'GET':
+            path = SUB_GET_PATH
+
+        # Make a subrequest that will be logged
+        hdrs = {'content-type': 'text/plain'}
+        sub_req = make_subrequest(env, path=path,
+                                  method=self.conf['subrequest_type'],
+                                  headers=hdrs,
+                                  agent='FakeApp',
+                                  swift_source='FA')
+        self.register(self.conf['subrequest_type'],
+                      path, HTTPOk, headers=hdrs)
+
+        resp = sub_req.get_response(self.app)
+        close_if_possible(resp.app_iter)
+
+        def _start_response(status, headers, exc_info=None):
+            return start_response(status, headers, exc_info)
+
+        return self.app(env, start_response)
+
+
+class FakeApp(object):
+    def __init__(self, conf):
+        self.fake_logger = FakeLogger()
+        self.fake_swift = self.app = FakeSwift()
+        self.register = self.fake_swift.register
+        for filter in reversed([
+                proxy_logging.filter_factory,
+                copy.filter_factory,
+                lambda conf: lambda app: FakeFilter(app, conf, self.register),
+                proxy_logging.filter_factory]):
+            self.app = filter(conf)(self.app)
+            self.app.logger = self.fake_logger
+            if hasattr(self.app, 'access_logger'):
+                self.app.access_logger = self.fake_logger
+
+        if conf['subrequest_type'] == 'GET':
+            self.register(conf['subrequest_type'], SUB_GET_PATH, HTTPOk, {})
+        else:
+            self.register(conf['subrequest_type'],
+                          SUB_PUT_POST_PATH, HTTPOk, {})
+
+    @property
+    def __call__(self):
+        return self.app.__call__
+
+
+class TestSubRequestLogging(unittest.TestCase):
+    path = '/v1/a/c/o'
+
+    def _test_subrequest_logged(self, subrequest_type):
+        # Test that subrequests made downstream from Copy PUT will be logged
+        # with the request type of the subrequest as opposed to the GET/PUT.
+
+        app = FakeApp({'subrequest_type': subrequest_type})
+
+        hdrs = {'content-type': 'text/plain', 'X-Copy-From': 'test/obj'}
+        req = Request.blank(self.path, method='PUT', headers=hdrs)
+
+        app.register('PUT', self.path, HTTPOk, headers=hdrs)
+        app.register('GET', '/v1/a/test/obj', HTTPOk, headers=hdrs)
+
+        req.get_response(app)
+        info_log_lines = app.fake_logger.get_lines_for_level('info')
+        self.assertEqual(len(info_log_lines), 5)
+        self.assertTrue(info_log_lines[0].startswith('Copying object'))
+        subreq_get = '%s %s' % (subrequest_type, SUB_GET_PATH)
+        subreq_put = '%s %s' % (subrequest_type, SUB_PUT_POST_PATH)
+        origput = 'PUT %s' % self.path
+        copyget = 'GET %s' % '/v1/a/test/obj'
+        # expect GET subreq, copy GET, PUT subreq, orig PUT
+        self.assertTrue(subreq_get in info_log_lines[1])
+        self.assertTrue(copyget in info_log_lines[2])
+        self.assertTrue(subreq_put in info_log_lines[3])
+        self.assertTrue(origput in info_log_lines[4])
+
+    def test_subrequest_logged_x_copy_from(self):
+        self._test_subrequest_logged('HEAD')
+        self._test_subrequest_logged('GET')
+        self._test_subrequest_logged('POST')
+        self._test_subrequest_logged('PUT')
+        self._test_subrequest_logged('DELETE')
+
+    def _test_subrequest_logged_POST(self, subrequest_type,
+                                     post_as_copy=False):
+        # Test that subrequests made downstream from Copy POST will be logged
+        # with the request type of the subrequest as opposed to the GET/PUT.
+
+        app = FakeApp({'subrequest_type': subrequest_type,
+                       'object_post_as_copy': post_as_copy})
+
+        hdrs = {'content-type': 'text/plain'}
+        req = Request.blank(self.path, method='POST', headers=hdrs)
+
+        app.register('POST', self.path, HTTPOk, headers=hdrs)
+        expect_lines = 2
+        if post_as_copy:
+            app.register('PUT', self.path, HTTPOk, headers=hdrs)
+            app.register('GET', '/v1/a/c/o', HTTPOk, headers=hdrs)
+            expect_lines = 4
+
+        req.get_response(app)
+        info_log_lines = app.fake_logger.get_lines_for_level('info')
+        self.assertEqual(len(info_log_lines), expect_lines)
+        self.assertTrue('Copying object' not in info_log_lines[0])
+
+        subreq_put_post = '%s %s' % (subrequest_type, SUB_PUT_POST_PATH)
+        origpost = 'POST %s' % self.path
+        copyget = 'GET %s' % self.path
+
+        if post_as_copy:
+            # post_as_copy expect GET subreq, copy GET, PUT subreq, orig POST
+            subreq_get = '%s %s' % (subrequest_type, SUB_GET_PATH)
+            self.assertTrue(subreq_get in info_log_lines[0])
+            self.assertTrue(copyget in info_log_lines[1])
+            self.assertTrue(subreq_put_post in info_log_lines[2])
+            self.assertTrue(origpost in info_log_lines[3])
+        else:
+            # fast post expect POST subreq, original POST
+            self.assertTrue(subreq_put_post in info_log_lines[0])
+            self.assertTrue(origpost in info_log_lines[1])
+
+    def test_subrequest_logged_post_as_copy_with_POST_fast_post(self):
+        self._test_subrequest_logged_POST('HEAD', post_as_copy=False)
+        self._test_subrequest_logged_POST('GET', post_as_copy=False)
+        self._test_subrequest_logged_POST('POST', post_as_copy=False)
+        self._test_subrequest_logged_POST('PUT', post_as_copy=False)
+        self._test_subrequest_logged_POST('DELETE', post_as_copy=False)
+
+    def test_subrequest_logged_post_as_copy_with_POST(self):
+        self._test_subrequest_logged_POST('HEAD', post_as_copy=True)
+        self._test_subrequest_logged_POST('GET', post_as_copy=True)
+        self._test_subrequest_logged_POST('POST', post_as_copy=True)
+        self._test_subrequest_logged_POST('PUT', post_as_copy=True)
+        self._test_subrequest_logged_POST('DELETE', post_as_copy=True)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff -Nru swift-2.10.1/test/unit/common/ring/test_builder.py swift-2.10.2/test/unit/common/ring/test_builder.py
--- swift-2.10.1/test/unit/common/ring/test_builder.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/common/ring/test_builder.py	2017-05-26 21:40:54.000000000 +0200
@@ -723,7 +723,7 @@
                     "Partition %d not in zones 0 and 1 (got %r)" %
                     (part, zones))
 
-    def test_min_part_hours_zero_will_move_whatever_it_takes(self):
+    def test_min_part_hours_zero_will_move_one_replica(self):
         rb = ring.RingBuilder(8, 3, 0)
         # there'll be at least one replica in z0 and z1
         rb.add_dev({'id': 0, 'region': 0, 'zone': 0, 'weight': 0.5,
@@ -747,6 +747,33 @@
         rb.validate()
 
         self.assertEqual(0, rb.dispersion)
+        # Only one replica could move, so some zones are quite unbalanced
+        self.assertAlmostEqual(rb.get_balance(), 66.66, delta=0.5)
+
+        # There were only zones 0 and 1 before adding more devices. Only one
+        # replica should have been moved, therefore we expect 256 parts in
+        # zones 0 and 1, and a total of 256 across zones 2, 3, and 4
+        expected = defaultdict(int, {0: 256, 1: 256, 2: 86, 3: 85, 4: 85})
+        self.assertEqual(expected, self._partition_counts(rb, key='zone'))
+
+        parts_with_moved_count = defaultdict(int)
+        for part in range(rb.parts):
+            zones = set()
+            for replica in range(rb.replicas):
+                zones.add(rb.devs[rb._replica2part2dev[replica][part]]['zone'])
+            moved_replicas = len(zones - {0, 1})
+            parts_with_moved_count[moved_replicas] += 1
+
+        # We expect that every partition moved exactly one replica
+        expected = {1: 256}
+        self.assertEqual(parts_with_moved_count, expected)
+
+        # After rebalancing two more times, we expect that everything is in a
+        # good state
+        rb.rebalance(seed=3)
+        rb.rebalance(seed=3)
+
+        self.assertEqual(0, rb.dispersion)
         # a balance of w/i a 1% isn't too bad for 3 replicas on 7
         # devices when part power is only 8
         self.assertAlmostEqual(rb.get_balance(), 0, delta=0.5)
diff -Nru swift-2.10.1/test/unit/common/ring/test_utils.py swift-2.10.2/test/unit/common/ring/test_utils.py
--- swift-2.10.1/test/unit/common/ring/test_utils.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/common/ring/test_utils.py	2017-05-26 21:40:54.000000000 +0200
@@ -664,11 +664,18 @@
                     'ip': '127.0.0.3', 'port': 10003, 'device': 'sdd1'})
 
         # when the biggest tier has the smallest devices things get ugly
+        # can't move all the part-replicas in one rebalance
         rb.rebalance(seed=100)
         report = dispersion_report(rb, verbose=True)
-        self.assertEqual(rb.dispersion, 70.3125)
+        self.assertEqual(rb.dispersion, 9.375)
+        self.assertEqual(report['worst_tier'], 'r1z1-127.0.0.1')
+        self.assertEqual(report['max_dispersion'], 7.18562874251497)
+        # do a second rebalance
+        rb.rebalance(seed=100)
+        report = dispersion_report(rb, verbose=True)
+        self.assertEqual(rb.dispersion, 50.0)
         self.assertEqual(report['worst_tier'], 'r1z0-127.0.0.3')
-        self.assertEqual(report['max_dispersion'], 88.23529411764706)
+        self.assertEqual(report['max_dispersion'], 50.0)
 
         # ... but overload can square it
         rb.set_overload(rb.get_required_overload())
diff -Nru swift-2.10.1/test/unit/common/test_bufferedhttp.py swift-2.10.2/test/unit/common/test_bufferedhttp.py
--- swift-2.10.1/test/unit/common/test_bufferedhttp.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/common/test_bufferedhttp.py	2017-05-26 21:40:54.000000000 +0200
@@ -19,10 +19,12 @@
 
 import socket
 
-from eventlet import spawn, Timeout, listen
+from eventlet import spawn, Timeout
 
 from swift.common import bufferedhttp
 
+from test import listen_zero
+
 
 class MockHTTPSConnection(object):
 
@@ -45,7 +47,7 @@
 class TestBufferedHTTP(unittest.TestCase):
 
     def test_http_connect(self):
-        bindsock = listen(('127.0.0.1', 0))
+        bindsock = listen_zero()
 
         def accept(expected_par):
             try:
diff -Nru swift-2.10.1/test/unit/common/test_db.py swift-2.10.2/test/unit/common/test_db.py
--- swift-2.10.1/test/unit/common/test_db.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/common/test_db.py	2017-05-26 21:40:54.000000000 +0200
@@ -758,6 +758,22 @@
                 str(exc),
                 'Quarantined %s to %s due to malformed database' %
                 (dbpath, qpath))
+            # Test malformed schema database
+            copy(os.path.join(os.path.dirname(__file__),
+                              'malformed_schema_example.db'),
+                 os.path.join(dbpath, '1.db'))
+            broker = DatabaseBroker(os.path.join(dbpath, '1.db'))
+            broker.db_type = 'test'
+            exc = None
+            try:
+                with broker.get() as conn:
+                    conn.execute('SELECT * FROM test')
+            except Exception as err:
+                exc = err
+            self.assertEqual(
+                str(exc),
+                'Quarantined %s to %s due to malformed database' %
+                (dbpath, qpath))
             # Test corrupted database
             copy(os.path.join(os.path.dirname(__file__),
                               'corrupted_example.db'),
@@ -1214,29 +1230,36 @@
             message = str(e)
         self.assertEqual(message, '400 Bad Request')
 
-    def test_possibly_quarantine_disk_error(self):
+    def test_possibly_quarantine_db_errors(self):
         dbpath = os.path.join(self.testdir, 'dev', 'dbs', 'par', 'pre', 'db')
-        mkdirs(dbpath)
         qpath = os.path.join(self.testdir, 'dev', 'quarantined', 'tests', 'db')
-        broker = DatabaseBroker(os.path.join(dbpath, '1.db'))
-        broker.db_type = 'test'
-
-        def stub():
-            raise sqlite3.OperationalError('disk I/O error')
-
-        try:
-            stub()
-        except Exception:
+        # Data is a list of Exceptions to be raised and expected values in the
+        # log
+        data = [
+            (sqlite3.DatabaseError('database disk image is malformed'),
+             'malformed'),
+            (sqlite3.DatabaseError('malformed database schema'), 'malformed'),
+            (sqlite3.DatabaseError('file is encrypted or is not a database'),
+             'corrupted'),
+            (sqlite3.OperationalError('disk I/O error'),
+             'disk error while accessing')]
+
+        for i, (ex, hint) in enumerate(data):
+            mkdirs(dbpath)
+            broker = DatabaseBroker(os.path.join(dbpath, '%d.db' % (i)))
+            broker.db_type = 'test'
             try:
-                broker.possibly_quarantine(*sys.exc_info())
-            except Exception as exc:
-                self.assertEqual(
-                    str(exc),
-                    'Quarantined %s to %s due to disk error '
-                    'while accessing database' %
-                    (dbpath, qpath))
-            else:
-                self.fail('Expected an exception to be raised')
+                raise ex
+            except (sqlite3.DatabaseError, DatabaseConnectionError):
+                try:
+                    broker.possibly_quarantine(*sys.exc_info())
+                except Exception as exc:
+                    self.assertEqual(
+                        str(exc),
+                        'Quarantined %s to %s due to %s database' %
+                        (dbpath, qpath, hint))
+                else:
+                    self.fail('Expected an exception to be raised')
 
 if __name__ == '__main__':
     unittest.main()
diff -Nru swift-2.10.1/test/unit/common/test_db_replicator.py swift-2.10.2/test/unit/common/test_db_replicator.py
--- swift-2.10.1/test/unit/common/test_db_replicator.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/common/test_db_replicator.py	2017-05-26 21:40:54.000000000 +0200
@@ -642,6 +642,50 @@
         replicator._replicate_object('0', '/path/to/file', 'node_id')
         self.assertEqual(['/path/to/file'], self.delete_db_calls)
 
+    def test_replicate_object_with_exception(self):
+        replicator = TestReplicator({})
+        replicator.ring = FakeRingWithNodes().Ring('path')
+        replicator.brokerclass = FakeAccountBroker
+        replicator.delete_db = self.stub_delete_db
+        replicator._repl_to_node = mock.Mock(side_effect=Exception())
+        replicator._replicate_object('0', '/path/to/file',
+                                     replicator.ring.devs[0]['id'])
+        self.assertEqual(2, replicator._repl_to_node.call_count)
+        # with one DriveNotMounted exception called on +1 more replica
+        replicator._repl_to_node = mock.Mock(side_effect=[DriveNotMounted()])
+        replicator._replicate_object('0', '/path/to/file',
+                                     replicator.ring.devs[0]['id'])
+        self.assertEqual(3, replicator._repl_to_node.call_count)
+        # called on +1 more replica and self when *first* handoff
+        replicator._repl_to_node = mock.Mock(side_effect=[DriveNotMounted()])
+        replicator._replicate_object('0', '/path/to/file',
+                                     replicator.ring.devs[3]['id'])
+        self.assertEqual(4, replicator._repl_to_node.call_count)
+        # even if it's the last handoff, it still keeps 3 replicas:
+        # 2 primaries + 1 handoff
+        replicator._repl_to_node = mock.Mock(side_effect=[DriveNotMounted()])
+        replicator._replicate_object('0', '/path/to/file',
+                                     replicator.ring.devs[-1]['id'])
+        self.assertEqual(4, replicator._repl_to_node.call_count)
+        # with two DriveNotMounted exceptions called on +2 more replica keeping
+        # durability
+        replicator._repl_to_node = mock.Mock(
+            side_effect=[DriveNotMounted()] * 2)
+        replicator._replicate_object('0', '/path/to/file',
+                                     replicator.ring.devs[0]['id'])
+        self.assertEqual(4, replicator._repl_to_node.call_count)
+
+    def test_replicate_object_with_exception_run_out_of_nodes(self):
+        replicator = TestReplicator({})
+        replicator.ring = FakeRingWithNodes().Ring('path')
+        replicator.brokerclass = FakeAccountBroker
+        replicator.delete_db = self.stub_delete_db
+        # all other devices are not mounted
+        replicator._repl_to_node = mock.Mock(side_effect=DriveNotMounted())
+        replicator._replicate_object('0', '/path/to/file',
+                                     replicator.ring.devs[0]['id'])
+        self.assertEqual(5, replicator._repl_to_node.call_count)
+
     def test_replicate_account_out_of_place(self):
         replicator = TestReplicator({}, logger=unit.FakeLogger())
         replicator.ring = FakeRingWithNodes().Ring('path')
diff -Nru swift-2.10.1/test/unit/common/test_storage_policy.py swift-2.10.2/test/unit/common/test_storage_policy.py
--- swift-2.10.1/test/unit/common/test_storage_policy.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/common/test_storage_policy.py	2017-05-26 21:40:54.000000000 +0200
@@ -12,7 +12,9 @@
 # limitations under the License.
 
 """ Tests for swift.common.storage_policies """
+import contextlib
 import six
+import logging
 import unittest
 import os
 import mock
@@ -20,6 +22,7 @@
 from six.moves.configparser import ConfigParser
 from tempfile import NamedTemporaryFile
 from test.unit import patch_policies, FakeRing, temptree, DEFAULT_TEST_EC_TYPE
+import swift.common.storage_policy
 from swift.common.storage_policy import (
     StoragePolicyCollection, POLICIES, PolicyError, parse_storage_policies,
     reload_storage_policies, get_policy_string, split_policy_string,
@@ -30,6 +33,26 @@
 from pyeclib.ec_iface import ECDriver
 
 
+class CapturingHandler(logging.Handler):
+    def __init__(self):
+        super(CapturingHandler, self).__init__()
+        self._records = []
+
+    def emit(self, record):
+        self._records.append(record)
+
+
+@contextlib.contextmanager
+def capture_logging(log_name):
+    captured = CapturingHandler()
+    logger = logging.getLogger(log_name)
+    logger.addHandler(captured)
+    try:
+        yield captured._records
+    finally:
+        logger.removeHandler(captured)
+
+
 @BaseStoragePolicy.register('fake')
 class FakeStoragePolicy(BaseStoragePolicy):
     """
@@ -582,6 +605,91 @@
             PolicyError, "must specify a storage policy section "
             "for policy index 0", parse_storage_policies, bad_conf)
 
+    @mock.patch.object(swift.common.storage_policy, 'VALID_EC_TYPES',
+                       ['isa_l_rs_vand', 'isa_l_rs_cauchy'])
+    @mock.patch('swift.common.storage_policy.ECDriver')
+    def test_known_bad_ec_config(self, mock_driver):
+        good_conf = self._conf("""
+        [storage-policy:0]
+        name = bad-policy
+        policy_type = erasure_coding
+        ec_type = isa_l_rs_cauchy
+        ec_num_data_fragments = 10
+        ec_num_parity_fragments = 5
+        """)
+
+        with capture_logging('swift.common.storage_policy') as records:
+            parse_storage_policies(good_conf)
+        mock_driver.assert_called_once()
+        mock_driver.reset_mock()
+        self.assertFalse([(r.levelname, r.message) for r in records])
+
+        good_conf = self._conf("""
+        [storage-policy:0]
+        name = bad-policy
+        policy_type = erasure_coding
+        ec_type = isa_l_rs_vand
+        ec_num_data_fragments = 10
+        ec_num_parity_fragments = 4
+        """)
+
+        with capture_logging('swift.common.storage_policy') as records:
+            parse_storage_policies(good_conf)
+        mock_driver.assert_called_once()
+        mock_driver.reset_mock()
+        self.assertFalse([(r.levelname, r.message) for r in records])
+
+        bad_conf = self._conf("""
+        [storage-policy:0]
+        name = bad-policy
+        policy_type = erasure_coding
+        ec_type = isa_l_rs_vand
+        ec_num_data_fragments = 10
+        ec_num_parity_fragments = 5
+        """)
+
+        with capture_logging('swift.common.storage_policy') as records:
+            parse_storage_policies(bad_conf)
+        mock_driver.assert_called_once()
+        mock_driver.reset_mock()
+        self.assertEqual([r.levelname for r in records],
+                         ['WARNING', 'WARNING'])
+        for msg in ('known to harm data durability',
+                    'Any data in this policy should be migrated',
+                    'https://bugs.launchpad.net/swift/+bug/1639691'):
+            self.assertIn(msg, records[0].message)
+        self.assertIn('In a future release, this will prevent services from '
+                      'starting', records[1].message)
+
+        slightly_less_bad_conf = self._conf("""
+        [storage-policy:0]
+        name = bad-policy
+        policy_type = erasure_coding
+        ec_type = isa_l_rs_vand
+        ec_num_data_fragments = 10
+        ec_num_parity_fragments = 5
+        deprecated = true
+
+        [storage-policy:1]
+        name = good-policy
+        policy_type = erasure_coding
+        ec_type = isa_l_rs_cauchy
+        ec_num_data_fragments = 10
+        ec_num_parity_fragments = 5
+        default = true
+        """)
+
+        with capture_logging('swift.common.storage_policy') as records:
+            parse_storage_policies(slightly_less_bad_conf)
+        self.assertEqual(2, mock_driver.call_count)
+        mock_driver.reset_mock()
+        self.assertEqual([r.levelname for r in records],
+                         ['WARNING'])
+        for msg in ('known to harm data durability',
+                    'Any data in this policy should be migrated',
+                    'https://bugs.launchpad.net/swift/+bug/1639691'):
+            self.assertIn(msg, records[0].message)
+
     def test_no_default(self):
         orig_conf = self._conf("""
         [storage-policy:0]
diff -Nru swift-2.10.1/test/unit/common/test_utils.py swift-2.10.2/test/unit/common/test_utils.py
--- swift-2.10.1/test/unit/common/test_utils.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/common/test_utils.py	2017-05-26 21:40:54.000000000 +0200
@@ -52,6 +52,7 @@
 from tempfile import TemporaryFile, NamedTemporaryFile, mkdtemp
 from netifaces import AF_INET6
 from mock import MagicMock, patch
+from nose import SkipTest
 from six.moves.configparser import NoSectionError, NoOptionError
 from uuid import uuid4
 
@@ -3558,6 +3559,12 @@
         def _fake_syscall(*args):
             called['syscall'] = args
 
+        # Test if current architecture supports changing of priority
+        try:
+            utils.NR_ioprio_set()
+        except OSError as e:
+            raise SkipTest(e)
+
         with patch('swift.common.utils._libc_setpriority',
                    _fake_setpriority), \
                 patch('swift.common.utils._posix_syscall', _fake_syscall):
diff -Nru swift-2.10.1/test/unit/common/test_wsgi.py swift-2.10.2/test/unit/common/test_wsgi.py
--- swift-2.10.1/test/unit/common/test_wsgi.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/common/test_wsgi.py	2017-05-26 21:40:54.000000000 +0200
@@ -23,7 +23,6 @@
 from textwrap import dedent
 from collections import defaultdict
 
-from eventlet import listen
 import six
 from six import BytesIO
 from six import StringIO
@@ -45,6 +44,7 @@
 from swift.common import wsgi, utils
 from swift.common.storage_policy import POLICIES
 
+from test import listen_zero
 from test.unit import (
     temptree, with_tempdir, write_fake_ring, patch_policies, FakeLogger)
 
@@ -384,7 +384,7 @@
                         with mock.patch('swift.common.wsgi.inspect'):
                             conf = wsgi.appconfig(conf_file)
                             logger = logging.getLogger('test')
-                            sock = listen(('localhost', 0))
+                            sock = listen_zero()
                             wsgi.run_server(conf, logger, sock)
         self.assertEqual('HTTP/1.0',
                          _wsgi.HttpProtocol.default_request_version)
@@ -434,7 +434,7 @@
                                getargspec=argspec_stub):
                 conf = wsgi.appconfig(conf_file)
                 logger = logging.getLogger('test')
-                sock = listen(('localhost', 0))
+                sock = listen_zero()
                 wsgi.run_server(conf, logger, sock)
 
         self.assertTrue(_wsgi.server.called)
@@ -471,7 +471,7 @@
                             with mock.patch('swift.common.wsgi.inspect'):
                                 conf = wsgi.appconfig(conf_dir)
                                 logger = logging.getLogger('test')
-                                sock = listen(('localhost', 0))
+                                sock = listen_zero()
                                 wsgi.run_server(conf, logger, sock)
                                 self.assertTrue(os.environ['TZ'] is not '')
 
@@ -526,7 +526,7 @@
                     with mock.patch('swift.common.wsgi.eventlet') as _eventlet:
                         conf = wsgi.appconfig(conf_file)
                         logger = logging.getLogger('test')
-                        sock = listen(('localhost', 0))
+                        sock = listen_zero()
                         wsgi.run_server(conf, logger, sock)
         self.assertEqual('HTTP/1.0',
                          _wsgi.HttpProtocol.default_request_version)
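
These wsgi tests (and several more below) replace eventlet.listen with a
listen_zero() helper imported from test/__init__.py. The helper itself is
outside this excerpt; presumably it just binds a plain socket to an ephemeral
loopback port so that repeated calls hand back distinct ports. A minimal
sketch of that idea, not the upstream implementation:

    import socket

    def listen_zero():
        # let the kernel pick a fresh ephemeral port on the loopback
        # interface for every call
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind(('127.0.0.1', 0))
        sock.listen(50)
        return sock
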
diff -Nru swift-2.10.1/test/unit/container/test_server.py swift-2.10.2/test/unit/container/test_server.py
--- swift-2.10.1/test/unit/container/test_server.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/container/test_server.py	2017-05-26 21:40:54.000000000 +0200
@@ -27,7 +27,7 @@
 import time
 import random
 
-from eventlet import spawn, Timeout, listen
+from eventlet import spawn, Timeout
 import json
 import six
 from six import BytesIO
@@ -45,6 +45,7 @@
 from swift.common.storage_policy import (POLICIES, StoragePolicy)
 from swift.common.request_helpers import get_sys_meta_prefix
 
+from test import listen_zero
 from test.unit import patch_policies
 
 
@@ -1023,7 +1024,7 @@
         self.assertEqual(resp.status_int, 400)
 
     def test_account_update_account_override_deleted(self):
-        bindsock = listen(('127.0.0.1', 0))
+        bindsock = listen_zero()
         req = Request.blank(
             '/sda1/p/a/c',
             environ={'REQUEST_METHOD': 'PUT',
@@ -1041,7 +1042,7 @@
             self.assertEqual(resp.status_int, 201)
 
     def test_PUT_account_update(self):
-        bindsock = listen(('127.0.0.1', 0))
+        bindsock = listen_zero()
 
         def accept(return_code, expected_timestamp):
             try:
@@ -1901,7 +1902,7 @@
             self.assertEqual(obj['last_modified'], t9.isoformat)
 
     def test_DELETE_account_update(self):
-        bindsock = listen(('127.0.0.1', 0))
+        bindsock = listen_zero()
 
         def accept(return_code, expected_timestamp):
             try:
diff -Nru swift-2.10.1/test/unit/container/test_updater.py swift-2.10.2/test/unit/container/test_updater.py
--- swift-2.10.1/test/unit/container/test_updater.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/container/test_updater.py	2017-05-26 21:40:54.000000000 +0200
@@ -23,7 +23,7 @@
 from tempfile import mkdtemp
 from test.unit import FakeLogger
 
-from eventlet import spawn, Timeout, listen
+from eventlet import spawn, Timeout
 
 from swift.common import utils
 from swift.container import updater as container_updater
@@ -31,6 +31,8 @@
 from swift.common.ring import RingData
 from swift.common.utils import normalize_timestamp
 
+from test import listen_zero
+
 
 class TestContainerUpdater(unittest.TestCase):
 
@@ -180,7 +182,7 @@
                 traceback.print_exc()
                 return err
             return None
-        bindsock = listen(('127.0.0.1', 0))
+        bindsock = listen_zero()
 
         def spawn_accepts():
             events = []
@@ -275,7 +277,7 @@
                 return err
             return None
 
-        bindsock = listen(('127.0.0.1', 0))
+        bindsock = listen_zero()
 
         def spawn_accepts():
             events = []
diff -Nru swift-2.10.1/test/unit/helpers.py swift-2.10.2/test/unit/helpers.py
--- swift-2.10.1/test/unit/helpers.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/helpers.py	2017-05-26 21:40:54.000000000 +0200
@@ -27,7 +27,7 @@
 import time
 
 
-from eventlet import listen, spawn, wsgi
+from eventlet import spawn, wsgi
 import mock
 from shutil import rmtree
 import six.moves.cPickle as pickle
@@ -45,6 +45,7 @@
 from swift.proxy import server as proxy_server
 import swift.proxy.controllers.obj
 
+from test import listen_zero
 from test.unit import write_fake_ring, DEFAULT_TEST_EC_TYPE, debug_logger, \
     connect_tcp, readuntil2crlfs
 
@@ -92,14 +93,14 @@
             'allow_versions': 't'}
     if extra_conf:
         conf.update(extra_conf)
-    prolis = listen(('localhost', 0))
-    acc1lis = listen(('localhost', 0))
-    acc2lis = listen(('localhost', 0))
-    con1lis = listen(('localhost', 0))
-    con2lis = listen(('localhost', 0))
-    obj1lis = listen(('localhost', 0))
-    obj2lis = listen(('localhost', 0))
-    obj3lis = listen(('localhost', 0))
+    prolis = listen_zero()
+    acc1lis = listen_zero()
+    acc2lis = listen_zero()
+    con1lis = listen_zero()
+    con2lis = listen_zero()
+    obj1lis = listen_zero()
+    obj2lis = listen_zero()
+    obj3lis = listen_zero()
     objsocks = [obj1lis, obj2lis, obj3lis]
     context["test_sockets"] = \
         (prolis, acc1lis, acc2lis, con1lis, con2lis, obj1lis, obj2lis, obj3lis)
diff -Nru swift-2.10.1/test/unit/obj/test_auditor.py swift-2.10.2/test/unit/obj/test_auditor.py
--- swift-2.10.1/test/unit/obj/test_auditor.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/obj/test_auditor.py	2017-05-26 21:40:54.000000000 +0200
@@ -830,7 +830,9 @@
         # create tombstone and hashes.pkl file, ensuring the tombstone is not
         # reclaimed by mocking time to be the tombstone time
         with mock.patch('time.time', return_value=float(ts_tomb)):
+            # this delete will create an invalid hashes entry
             self.disk_file.delete(ts_tomb)
+            # this get_hashes call will truncate the invalid hashes entry
             self.disk_file.manager.get_hashes(
                 self.devices + '/sda', '0', [], self.disk_file.policy)
         suffix = basename(dirname(self.disk_file._datadir))
@@ -839,8 +841,10 @@
         self.assertEqual(['%s.ts' % ts_tomb.internal],
                          os.listdir(self.disk_file._datadir))
         self.assertTrue(os.path.exists(os.path.join(part_dir, HASH_FILE)))
-        self.assertFalse(os.path.exists(
-            os.path.join(part_dir, HASH_INVALIDATIONS_FILE)))
+        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
+        self.assertTrue(os.path.exists(hash_invalid))
+        with open(hash_invalid, 'rb') as fp:
+            self.assertEqual('', fp.read().strip('\n'))
         # Run auditor
         self.auditor.run_audit(mode='once', zero_byte_fps=zero_byte_fps)
         # sanity check - auditor should not remove tombstone file
@@ -853,8 +857,10 @@
         ts_tomb = Timestamp(time.time() - 55)
         part_dir, suffix = self._audit_tombstone(self.conf, ts_tomb)
         self.assertTrue(os.path.exists(os.path.join(part_dir, HASH_FILE)))
-        self.assertFalse(os.path.exists(
-            os.path.join(part_dir, HASH_INVALIDATIONS_FILE)))
+        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
+        self.assertTrue(os.path.exists(hash_invalid))
+        with open(hash_invalid, 'rb') as fp:
+            self.assertEqual('', fp.read().strip('\n'))
 
     def test_reclaimable_tombstone(self):
         # audit with a reclaimable tombstone
@@ -874,8 +880,10 @@
         conf['reclaim_age'] = 2 * 604800
         part_dir, suffix = self._audit_tombstone(conf, ts_tomb)
         self.assertTrue(os.path.exists(os.path.join(part_dir, HASH_FILE)))
-        self.assertFalse(os.path.exists(
-            os.path.join(part_dir, HASH_INVALIDATIONS_FILE)))
+        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
+        self.assertTrue(os.path.exists(hash_invalid))
+        with open(hash_invalid, 'rb') as fp:
+            self.assertEqual('', fp.read().strip('\n'))
 
     def test_reclaimable_tombstone_with_custom_reclaim_age(self):
         # audit with a tombstone older than custom reclaim age
@@ -897,8 +905,10 @@
         part_dir, suffix = self._audit_tombstone(
             self.conf, ts_tomb, zero_byte_fps=50)
         self.assertTrue(os.path.exists(os.path.join(part_dir, HASH_FILE)))
-        self.assertFalse(os.path.exists(
-            os.path.join(part_dir, HASH_INVALIDATIONS_FILE)))
+        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
+        self.assertTrue(os.path.exists(hash_invalid))
+        with open(hash_invalid, 'rb') as fp:
+            self.assertEqual('', fp.read().strip('\n'))
 
     def _test_expired_object_is_ignored(self, zero_byte_fps):
         # verify that an expired object does not get mistaken for a tombstone
@@ -910,15 +920,41 @@
                        extra_metadata={'X-Delete-At': now - 10})
         files = os.listdir(self.disk_file._datadir)
         self.assertTrue([f for f in files if f.endswith('.data')])  # sanity
+        # diskfile write appends to invalid hashes file
+        part_dir = dirname(dirname(self.disk_file._datadir))
+        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
+        with open(hash_invalid, 'rb') as fp:
+            self.assertEqual(basename(dirname(self.disk_file._datadir)),
+                             fp.read().strip('\n'))  # sanity check
+
+        # run the auditor...
         with mock.patch.object(auditor, 'dump_recon_cache'):
             audit.run_audit(mode='once', zero_byte_fps=zero_byte_fps)
+
+        # the auditor doesn't touch the invalidation file
+        # (i.e. it neither truncates it nor adds an entry)
+        with open(hash_invalid, 'rb') as fp:
+            self.assertEqual(basename(dirname(self.disk_file._datadir)),
+                             fp.read().strip('\n'))  # sanity check
+
+        # this get_hashes call will truncate the invalid hashes entry
+        self.disk_file.manager.get_hashes(
+            self.devices + '/sda', '0', [], self.disk_file.policy)
+        with open(hash_invalid, 'rb') as fp:
+            self.assertEqual('', fp.read().strip('\n'))  # sanity check
+
+        # run the auditor, again...
+        with mock.patch.object(auditor, 'dump_recon_cache'):
+            audit.run_audit(mode='once', zero_byte_fps=zero_byte_fps)
+
+        # verify nothing changed
         self.assertTrue(os.path.exists(self.disk_file._datadir))
-        part_dir = dirname(dirname(self.disk_file._datadir))
-        self.assertFalse(os.path.exists(
-            os.path.join(part_dir, HASH_INVALIDATIONS_FILE)))
         self.assertEqual(files, os.listdir(self.disk_file._datadir))
         self.assertFalse(audit.logger.get_lines_for_level('error'))
         self.assertFalse(audit.logger.get_lines_for_level('warning'))
+        # and there was no hash invalidation
+        with open(hash_invalid, 'rb') as fp:
+            self.assertEqual('', fp.read().strip('\n'))
 
     def test_expired_object_is_ignored(self):
         self._test_expired_object_is_ignored(0)
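
The auditor assertions above rely on the reworked invalidation flow: a
diskfile delete appends its suffix to <partition>/hashes.invalid, and a later
get_hashes() call consolidates those entries into hashes.pkl and truncates
hashes.invalid. A small illustrative helper, assuming only the on-disk layout
visible in these tests (the function name is made up):

    import os

    HASH_INVALIDATIONS_FILE = 'hashes.invalid'

    def read_pending_invalidations(part_dir):
        # suffixes invalidated since the last consolidation sit one per
        # line in hashes.invalid; after get_hashes() the file is empty
        path = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
        try:
            with open(path) as fp:
                return [line.strip() for line in fp if line.strip()]
        except IOError:
            return []
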
diff -Nru swift-2.10.1/test/unit/obj/test_diskfile.py swift-2.10.2/test/unit/obj/test_diskfile.py
--- swift-2.10.1/test/unit/obj/test_diskfile.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/obj/test_diskfile.py	2017-05-26 21:40:54.000000000 +0200
@@ -28,6 +28,7 @@
 import uuid
 import xattr
 import re
+import six
 from collections import defaultdict
 from random import shuffle, randint
 from shutil import rmtree
@@ -5188,6 +5189,22 @@
         filename += '.meta'
         return filename
 
+    def get_different_suffix_df(self, df, **kwargs):
+        # returns diskfile in the same partition with different suffix
+        suffix_dir = os.path.dirname(df._datadir)
+        for i in itertools.count():
+            df2 = df._manager.get_diskfile(
+                df._device_path,
+                df._datadir.split('/')[-3],
+                df._account,
+                df._container,
+                'o%d' % i,
+                policy=df.policy,
+                **kwargs)
+            suffix_dir2 = os.path.dirname(df2._datadir)
+            if suffix_dir != suffix_dir2:
+                return df2
+
     def check_cleanup_ondisk_files(self, policy, input_files, output_files):
         orig_unlink = os.unlink
         file_list = list(input_files)
@@ -5464,18 +5481,40 @@
             df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o',
                                      policy=policy)
             suffix_dir = os.path.dirname(df._datadir)
+            suffix = os.path.basename(suffix_dir)
             part_path = os.path.join(self.devices, 'sda1',
                                      diskfile.get_data_dir(policy), '0')
             hashes_file = os.path.join(part_path, diskfile.HASH_FILE)
             inv_file = os.path.join(
                 part_path, diskfile.HASH_INVALIDATIONS_FILE)
-            self.assertFalse(os.path.exists(hashes_file))  # sanity
-            with mock.patch('swift.obj.diskfile.lock_path') as mock_lock:
-                df_mgr.invalidate_hash(suffix_dir)
-            self.assertFalse(mock_lock.called)
-            # does not create files
+            # sanity, new partition has no suffix hashing artifacts
             self.assertFalse(os.path.exists(hashes_file))
             self.assertFalse(os.path.exists(inv_file))
+            # invalidating a hash does not create the hashes_file
+            with mock.patch(
+                    'swift.obj.diskfile.BaseDiskFileManager.invalidate_hash',
+                    side_effect=diskfile.invalidate_hash) \
+                    as mock_invalidate_hash:
+                df.delete(self.ts())
+            self.assertFalse(os.path.exists(hashes_file))
+            # ... but does invalidate the suffix
+            self.assertEqual([mock.call(suffix_dir)],
+                             mock_invalidate_hash.call_args_list)
+            with open(inv_file) as f:
+                self.assertEqual(suffix, f.read().strip('\n'))
+            # ... and hashing suffixes finds (and hashes) the new suffix
+            hashes = df_mgr.get_hashes('sda1', '0', [], policy)
+            self.assertIn(suffix, hashes)
+            self.assertTrue(os.path.exists(hashes_file))
+            self.assertIn(os.path.basename(suffix_dir), hashes)
+            with open(hashes_file) as f:
+                found_hashes = pickle.load(f)
+                found_hashes.pop('updated')
+                self.assertTrue(found_hashes.pop('valid'))
+                self.assertEqual(hashes, found_hashes)
+            # ... and truncates the invalidations file
+            with open(inv_file) as f:
+                self.assertEqual('', f.read().strip('\n'))
 
     def test_invalidate_hash_empty_file_exists(self):
         for policy in self.iter_policies():
@@ -5491,6 +5530,242 @@
             hashes = df_mgr.get_hashes('sda1', '0', [], policy)
             self.assertIn(suffix, hashes)  # sanity
 
+    def test_invalidate_hash_file_not_truncated_when_empty(self):
+        orig_open = open
+
+        def watch_open(*args, **kargs):
+            name = os.path.basename(args[0])
+            open_log[name].append(args[1])
+            return orig_open(*args, **kargs)
+
+        for policy in self.iter_policies():
+            df_mgr = self.df_router[policy]
+            part_path = os.path.join(self.devices, 'sda1',
+                                     diskfile.get_data_dir(policy), '0')
+            inv_file = os.path.join(
+                part_path, diskfile.HASH_INVALIDATIONS_FILE)
+            hash_file = os.path.join(
+                part_path, diskfile.HASH_FILE)
+
+            hashes = df_mgr.get_hashes('sda1', '0', [], policy)
+            self.assertEqual(hashes, {})
+            self.assertTrue(os.path.exists(hash_file))
+            # create something to hash
+            df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o',
+                                     policy=policy)
+            df.delete(self.ts())
+            self.assertTrue(os.path.exists(inv_file))
+            # invalidation file created, let's consolidate it
+            df_mgr.get_hashes('sda1', '0', [], policy)
+
+            open_log = defaultdict(list)
+            open_loc = '__builtin__.open' if six.PY2 else 'builtins.open'
+            with mock.patch(open_loc, watch_open):
+                self.assertTrue(os.path.exists(inv_file))
+                # no new suffixes get invalidated... so no write iop
+                df_mgr.get_hashes('sda1', '0', [], policy)
+            # each file is opened once to read
+            expected = {
+                'hashes.pkl': ['rb'],
+                'hashes.invalid': ['rb'],
+            }
+            self.assertEqual(open_log, expected)
+
+    def test_invalidates_hashes_of_new_partition(self):
+        # a suffix can be changed or created by a second process while the
+        # new pkl is calculated; it must be right on the next get_hashes call
+        for policy in self.iter_policies():
+            df_mgr = self.df_router[policy]
+            orig_listdir = os.listdir
+            df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o',
+                                     policy=policy)
+            suffix = os.path.basename(os.path.dirname(df._datadir))
+            df2 = self.get_different_suffix_df(df)
+            suffix2 = os.path.basename(os.path.dirname(df2._datadir))
+            non_local = {'df2touched': False}
+            df.delete(self.ts())
+
+            def mock_listdir(*args, **kwargs):
+                # simulating an invalidation occurring in another process while
+                # get_hashes is executing
+                result = orig_listdir(*args, **kwargs)
+                if not non_local['df2touched']:
+                    non_local['df2touched'] = True
+                    # other process creates new suffix
+                    df2.delete(self.ts())
+                return result
+
+            with mock.patch('swift.obj.diskfile.os.listdir',
+                            mock_listdir):
+                # creates pkl file
+                hashes = df_mgr.get_hashes('sda1', '0', [], policy)
+
+            # second suffix was created after the listing; it appears later
+            self.assertIn(suffix, hashes)
+            self.assertNotIn(suffix2, hashes)
+            # updates pkl file
+            hashes = df_mgr.get_hashes('sda1', '0', [], policy)
+            self.assertIn(suffix, hashes)
+            self.assertIn(suffix2, hashes)
+
+    def test_hash_invalidations_survive_racing_get_hashes_diff_suffix(self):
+        # get_hashes must repeat path listing and return all hashes when
+        # another concurrent process created new pkl before hashes are stored
+        # by the first process
+        non_local = {}
+        for policy in self.iter_policies():
+            df_mgr = self.df_router[policy]
+            # force hashes.pkl to exist; when it does not exist that's fine,
+            # it's just a different race; in that case the invalidation file
+            # gets appended, but we don't restart hashing suffixes (the
+            # invalidation gets squashed in and the suffix gets rehashed on
+            # the next REPLICATE call)
+            df_mgr.get_hashes('sda1', '0', [], policy)
+            orig_listdir = os.listdir
+            df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o',
+                                     policy=policy)
+            suffix = os.path.basename(os.path.dirname(df._datadir))
+            df2 = self.get_different_suffix_df(df)
+            suffix2 = os.path.basename(os.path.dirname(df2._datadir))
+            non_local['df2touched'] = False
+
+            df.delete(self.ts())
+
+            def mock_listdir(*args, **kwargs):
+                # simulating hashes.pkl modification by another process while
+                # get_hashes is executing
+                # df2 is created to check path hashes recalculation
+                result = orig_listdir(*args, **kwargs)
+                if not non_local['df2touched']:
+                    non_local['df2touched'] = True
+                    df2.delete(self.ts())
+                return result
+
+            with mock.patch('swift.obj.diskfile.os.listdir',
+                            mock_listdir):
+                # creates pkl file but leaves invalidation alone
+                hashes = df_mgr.get_hashes('sda1', '0', [], policy)
+
+            # suffix2 just sits in the invalidations file
+            self.assertIn(suffix, hashes)
+            self.assertNotIn(suffix2, hashes)
+
+            # it'll show up next hash
+            hashes = df_mgr.get_hashes('sda1', '0', [], policy)
+            self.assertIn(suffix, hashes)
+            self.assertIn(suffix2, hashes)
+
+    def _check_hash_invalidations_race_get_hashes_same_suffix(self, existing):
+        # verify that when two processes concurrently call get_hashes, then any
+        # concurrent hash invalidation will survive and be consolidated on a
+        # subsequent call to get_hashes (i.e. ensure the first get_hashes
+        # process does not ignore a concurrent hash invalidation that the
+        # second get_hashes might have consolidated into hashes.pkl)
+        non_local = {}
+
+        for policy in self.iter_policies():
+            df_mgr = self.df_router[policy]
+            orig_hash_suffix = df_mgr._hash_suffix
+            if existing:
+                # create hashes.pkl
+                df_mgr.get_hashes('sda1', '0', [], policy)
+
+            df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o',
+                                     policy=policy)
+            suffix_dir = os.path.dirname(df._datadir)
+            suffix = os.path.basename(suffix_dir)
+            part_dir = os.path.dirname(suffix_dir)
+            invalidations_file = os.path.join(
+                part_dir, diskfile.HASH_INVALIDATIONS_FILE)
+
+            non_local['hash'] = None
+            non_local['called'] = False
+
+            # delete will append suffix to hashes.invalid
+            df.delete(self.ts())
+            with open(invalidations_file) as f:
+                self.assertEqual(suffix, f.read().strip('\n'))  # sanity
+            hash1 = df_mgr._hash_suffix(suffix_dir, diskfile.ONE_WEEK)
+
+            def mock_hash_suffix(*args, **kwargs):
+                # after first get_hashes has called _hash_suffix, simulate a
+                # second process invalidating the same suffix, followed by a
+                # third process calling get_hashes and failing (or yielding)
+                # after consolidate_hashes has completed
+                result = orig_hash_suffix(*args, **kwargs)
+                if not non_local['called']:
+                    non_local['called'] = True
+                    # appends suffix to hashes.invalid
+                    df.delete(self.ts())
+                    # simulate another process calling get_hashes but failing
+                    # after hash invalidations have been consolidated
+                    hashes = df_mgr.consolidate_hashes(part_dir)
+                    if existing:
+                        self.assertTrue(hashes['valid'])
+                    else:
+                        self.assertFalse(hashes['valid'])
+                    # get the updated suffix hash...
+                    non_local['hash'] = orig_hash_suffix(suffix_dir,
+                                                         diskfile.ONE_WEEK)
+                return result
+
+            with mock.patch.object(df_mgr, '_hash_suffix', mock_hash_suffix):
+                # creates pkl file and repeats listing when pkl modified
+                hashes = df_mgr.get_hashes('sda1', '0', [], policy)
+
+            # first get_hashes should complete with suffix1 state
+            self.assertIn(suffix, hashes)
+            # sanity check - the suffix hash has changed...
+            self.assertNotEqual(hash1, non_local['hash'])
+            # the invalidation file has been truncated...
+            with open(invalidations_file, 'r') as f:
+                self.assertEqual('', f.read())
+            # so hashes should have the latest suffix hash...
+            self.assertEqual(hashes[suffix], non_local['hash'])
+
+    def test_hash_invalidations_race_get_hashes_same_suffix_new(self):
+        self._check_hash_invalidations_race_get_hashes_same_suffix(False)
+
+    def test_hash_invalidations_race_get_hashes_same_suffix_existing(self):
+        self._check_hash_invalidations_race_get_hashes_same_suffix(True)
+
+    def _check_unpickle_error_and_get_hashes_failure(self, existing):
+        for policy in self.iter_policies():
+            df_mgr = self.df_router[policy]
+            df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o',
+                                     policy=policy)
+            suffix = os.path.basename(os.path.dirname(df._datadir))
+            if existing:
+                df.delete(self.ts())
+                hashes = df_mgr.get_hashes('sda1', '0', [], policy)
+            df.delete(self.ts())
+            part_path = os.path.join(self.devices, 'sda1',
+                                     diskfile.get_data_dir(policy), '0')
+            hashes_file = os.path.join(part_path, diskfile.HASH_FILE)
+            # write a corrupt hashes.pkl
+            open(hashes_file, 'w')
+            # simulate first call to get_hashes failing after attempting to
+            # consolidate hashes
+            with mock.patch('swift.obj.diskfile.os.listdir',
+                            side_effect=Exception()):
+                self.assertRaises(
+                    Exception, df_mgr.get_hashes, 'sda1', '0', [], policy)
+            # sanity on-disk state is invalid
+            with open(hashes_file) as f:
+                found_hashes = pickle.load(f)
+                found_hashes.pop('updated')
+                self.assertEqual(False, found_hashes.pop('valid'))
+            # verify subsequent call to get_hashes reaches correct outcome
+            hashes = df_mgr.get_hashes('sda1', '0', [], policy)
+            self.assertIn(suffix, hashes)
+            self.assertEqual([], df_mgr.logger.get_lines_for_level('warning'))
+
+    def test_unpickle_error_and_get_hashes_failure_new_part(self):
+        self._check_unpickle_error_and_get_hashes_failure(False)
+
+    def test_unpickle_error_and_get_hashes_failure_existing_part(self):
+        self._check_unpickle_error_and_get_hashes_failure(True)
+
     def test_invalidate_hash_consolidation(self):
         def assert_consolidation(suffixes):
             # verify that suffixes are invalidated after consolidation
@@ -5501,7 +5776,9 @@
                 self.assertIn(suffix, hashes)
                 self.assertIsNone(hashes[suffix])
             with open(hashes_file, 'rb') as f:
-                self.assertEqual(hashes, pickle.load(f))
+                found_hashes = pickle.load(f)
+                self.assertTrue(hashes['valid'])
+                self.assertEqual(hashes, found_hashes)
             with open(invalidations_file, 'rb') as f:
                 self.assertEqual("", f.read())
             return hashes
@@ -5525,8 +5802,10 @@
             invalidations_file = os.path.join(
                 part_path, diskfile.HASH_INVALIDATIONS_FILE)
             with open(hashes_file, 'rb') as f:
-                self.assertEqual(original_hashes, pickle.load(f))
-            self.assertFalse(os.path.exists(invalidations_file))
+                found_hashes = pickle.load(f)
+                found_hashes.pop('updated')
+                self.assertTrue(found_hashes.pop('valid'))
+                self.assertEqual(original_hashes, found_hashes)
 
             # invalidate the hash
             with mock.patch('swift.obj.diskfile.lock_path') as mock_lock:
@@ -5537,30 +5816,28 @@
                 self.assertEqual(suffix + "\n", f.read())
             # hashes file is unchanged
             with open(hashes_file, 'rb') as f:
-                self.assertEqual(original_hashes, pickle.load(f))
+                found_hashes = pickle.load(f)
+                found_hashes.pop('updated')
+                self.assertTrue(found_hashes.pop('valid'))
+                self.assertEqual(original_hashes, found_hashes)
 
             # consolidate the hash and the invalidations
             hashes = assert_consolidation([suffix])
 
             # invalidate a different suffix hash in same partition but not in
             # existing hashes.pkl
-            i = 0
-            while True:
-                df2 = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o%d' % i,
-                                          policy=policy)
-                i += 1
-                suffix_dir2 = os.path.dirname(df2._datadir)
-                if suffix_dir != suffix_dir2:
-                    break
-
+            df2 = self.get_different_suffix_df(df)
             df2.delete(self.ts())
+            suffix_dir2 = os.path.dirname(df2._datadir)
             suffix2 = os.path.basename(suffix_dir2)
             # suffix2 should be in invalidations file
             with open(invalidations_file, 'rb') as f:
                 self.assertEqual(suffix2 + "\n", f.read())
             # hashes file is not yet changed
             with open(hashes_file, 'rb') as f:
-                self.assertEqual(hashes, pickle.load(f))
+                found_hashes = pickle.load(f)
+                self.assertTrue(hashes['valid'])
+                self.assertEqual(hashes, found_hashes)
 
             # consolidate hashes
             hashes = assert_consolidation([suffix, suffix2])
@@ -5573,10 +5850,96 @@
                 self.assertEqual("%s\n%s\n" % (suffix2, suffix2), f.read())
             # hashes file is not yet changed
             with open(hashes_file, 'rb') as f:
-                self.assertEqual(hashes, pickle.load(f))
+                found_hashes = pickle.load(f)
+                self.assertTrue(hashes['valid'])
+                self.assertEqual(hashes, found_hashes)
             # consolidate hashes
             assert_consolidation([suffix, suffix2])
 
+    def test_get_hashes_consolidates_suffix_rehash_once(self):
+        for policy in self.iter_policies():
+            df_mgr = self.df_router[policy]
+            df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o',
+                                     policy=policy)
+            df.delete(self.ts())
+            suffix_dir = os.path.dirname(df._datadir)
+
+            with mock.patch.object(df_mgr, 'consolidate_hashes',
+                                   side_effect=df_mgr.consolidate_hashes
+                                   ) as mock_consolidate_hashes, \
+                    mock.patch.object(df_mgr, '_hash_suffix',
+                                      side_effect=df_mgr._hash_suffix
+                                      ) as mock_hash_suffix:
+                # creates pkl file
+                df_mgr.get_hashes('sda1', '0', [], policy)
+                mock_consolidate_hashes.assert_called_once()
+                self.assertEqual([mock.call(suffix_dir, diskfile.ONE_WEEK)],
+                                 mock_hash_suffix.call_args_list)
+                # second object in path
+                df2 = self.get_different_suffix_df(df)
+                df2.delete(self.ts())
+                suffix_dir2 = os.path.dirname(df2._datadir)
+                mock_consolidate_hashes.reset_mock()
+                mock_hash_suffix.reset_mock()
+                # updates pkl file
+                df_mgr.get_hashes('sda1', '0', [], policy)
+                mock_consolidate_hashes.assert_called_once()
+                self.assertEqual([mock.call(suffix_dir2, diskfile.ONE_WEEK)],
+                                 mock_hash_suffix.call_args_list)
+
+    def test_consolidate_hashes_raises_exception(self):
+        # verify that if consolidate_hashes raises an exception then suffixes
+        # are rehashed and a hashes.pkl is written
+        for policy in self.iter_policies():
+            self.logger.clear()
+            df_mgr = self.df_router[policy]
+            # create something to hash
+            df = df_mgr.get_diskfile('sda1', '0', 'a', 'c', 'o',
+                                     policy=policy)
+            df.delete(self.ts())
+            suffix_dir = os.path.dirname(df._datadir)
+            suffix = os.path.basename(suffix_dir)
+            # no pre-existing hashes.pkl
+            with mock.patch.object(df_mgr, '_hash_suffix',
+                                   return_value='fake hash'):
+                with mock.patch.object(df_mgr, 'consolidate_hashes',
+                                       side_effect=Exception()):
+                    hashes = df_mgr.get_hashes('sda1', '0', [], policy)
+            self.assertEqual({suffix: 'fake hash'}, hashes)
+
+            # sanity check hashes file
+            part_path = os.path.join(self.devices, 'sda1',
+                                     diskfile.get_data_dir(policy), '0')
+            hashes_file = os.path.join(part_path, diskfile.HASH_FILE)
+
+            with open(hashes_file, 'rb') as f:
+                found_hashes = pickle.load(f)
+                found_hashes.pop('updated')
+                self.assertTrue(found_hashes.pop('valid'))
+                self.assertEqual(hashes, found_hashes)
+
+            # sanity check log warning
+            warnings = self.logger.get_lines_for_level('warning')
+            self.assertEqual(warnings, ["Unable to read %r" % hashes_file])
+
+            # repeat with pre-existing hashes.pkl
+            with mock.patch.object(df_mgr, '_hash_suffix',
+                                   return_value='new fake hash'):
+                with mock.patch.object(df_mgr, 'consolidate_hashes',
+                                       side_effect=Exception()):
+                    hashes = df_mgr.get_hashes('sda1', '0', [], policy)
+            self.assertEqual({suffix: 'new fake hash'}, hashes)
+
+            # sanity check hashes file
+            part_path = os.path.join(self.devices, 'sda1',
+                                     diskfile.get_data_dir(policy), '0')
+            hashes_file = os.path.join(part_path, diskfile.HASH_FILE)
+            with open(hashes_file, 'rb') as f:
+                found_hashes = pickle.load(f)
+                found_hashes.pop('updated')
+                self.assertTrue(found_hashes.pop('valid'))
+                self.assertEqual(hashes, found_hashes)
+
     # invalidate_hash tests - error handling
 
     def test_invalidate_hash_bad_pickle(self):
@@ -5802,8 +6165,8 @@
             self.assertFalse(os.path.exists(df._datadir))
 
     def test_hash_suffix_one_reclaim_and_one_valid_tombstone(self):
+        paths, suffix = find_paths_with_matching_suffixes(2, 1)
         for policy in self.iter_policies():
-            paths, suffix = find_paths_with_matching_suffixes(2, 1)
             df_mgr = self.df_router[policy]
             a, c, o = paths[suffix][0]
             df1 = df_mgr.get_diskfile(
@@ -6597,6 +6960,71 @@
                                        policy)
             self.assertEqual(hashes, {})
 
+    def _test_get_hashes_race(self, hash_breaking_function):
+        for policy in self.iter_policies():
+            df_mgr = self.df_router[policy]
+
+            df = df_mgr.get_diskfile(self.existing_device, '0', 'a', 'c',
+                                     'o', policy=policy, frag_index=3)
+            suffix = os.path.basename(os.path.dirname(df._datadir))
+
+            df2 = self.get_different_suffix_df(df, frag_index=5)
+            suffix2 = os.path.basename(os.path.dirname(df2._datadir))
+            part_path = os.path.dirname(os.path.dirname(
+                os.path.join(df._datadir)))
+            hashfile_path = os.path.join(part_path, diskfile.HASH_FILE)
+            # create hashes.pkl
+            hashes = df_mgr.get_hashes(self.existing_device, '0', [],
+                                       policy)
+            self.assertEqual(hashes, {})  # sanity
+            self.assertTrue(os.path.exists(hashfile_path))
+            # and optionally tamper with the hashes.pkl...
+            hash_breaking_function(hashfile_path)
+            non_local = {'called': False}
+            orig_hash_suffix = df_mgr._hash_suffix
+
+            # then create a suffix
+            df.delete(self.ts())
+
+            def mock_hash_suffix(*args, **kwargs):
+                # capture first call to mock_hash
+                if not non_local['called']:
+                    non_local['called'] = True
+                    df2.delete(self.ts())
+                    non_local['other_hashes'] = df_mgr.get_hashes(
+                        self.existing_device, '0', [], policy)
+                return orig_hash_suffix(*args, **kwargs)
+
+            with mock.patch.object(df_mgr, '_hash_suffix', mock_hash_suffix):
+                hashes = df_mgr.get_hashes(self.existing_device, '0', [],
+                                           policy)
+
+            self.assertTrue(non_local['called'])
+            self.assertIn(suffix, hashes)
+            self.assertIn(suffix2, hashes)
+
+    def test_get_hashes_race_invalid_pickle(self):
+        def hash_breaking_function(hashfile_path):
+            # create a garbage zero-byte file which cannot be unpickled
+            open(hashfile_path, 'w').close()
+        self._test_get_hashes_race(hash_breaking_function)
+
+    def test_get_hashes_race_new_partition(self):
+        def hash_breaking_function(hashfile_path):
+            # simulate rebalanced part doing post-rsync REPLICATE
+            os.unlink(hashfile_path)
+            part_dir = os.path.dirname(hashfile_path)
+            os.unlink(os.path.join(part_dir, '.lock'))
+            # sanity
+            self.assertEqual([], os.listdir(os.path.dirname(hashfile_path)))
+        self._test_get_hashes_race(hash_breaking_function)
+
+    def test_get_hashes_race_existing_partition(self):
+        def hash_breaking_function(hashfile_path):
+            # no-op - simulate ok existing partition
+            self.assertTrue(os.path.exists(hashfile_path))
+        self._test_get_hashes_race(hash_breaking_function)
+
     def test_get_hashes_hash_suffix_enotdir(self):
         for policy in self.iter_policies():
             df_mgr = self.df_router[policy]
@@ -6650,37 +7078,125 @@
             df_mgr = self.df_router[policy]
             # first create an empty pickle
             df_mgr.get_hashes(self.existing_device, '0', [], policy)
-            hashes_file = os.path.join(
-                self.devices, self.existing_device,
-                diskfile.get_data_dir(policy), '0', diskfile.HASH_FILE)
-            mtime = os.path.getmtime(hashes_file)
-            non_local = {'mtime': mtime}
-
+            non_local = {'suffix_count': 1}
             calls = []
 
-            def mock_getmtime(filename):
-                t = non_local['mtime']
+            def mock_read_hashes(filename):
+                rv = {'%03x' % i: 'fake'
+                      for i in range(non_local['suffix_count'])}
                 if len(calls) <= 3:
-                    # this will make the *next* call get a slightly
-                    # newer mtime than the last
-                    non_local['mtime'] += 1
+                    # this will make the *next* call get slightly
+                    # different content
+                    non_local['suffix_count'] += 1
                 # track exactly the value for every return
-                calls.append(t)
-                return t
-            with mock.patch('swift.obj.diskfile.getmtime',
-                            mock_getmtime):
+                calls.append(dict(rv))
+                rv['valid'] = True
+                return rv
+            with mock.patch('swift.obj.diskfile.read_hashes',
+                            mock_read_hashes):
                 df_mgr.get_hashes(self.existing_device, '0', ['123'],
                                   policy)
 
             self.assertEqual(calls, [
-                mtime + 0,  # read
-                mtime + 1,  # modified
-                mtime + 2,  # read
-                mtime + 3,  # modifed
-                mtime + 4,  # read
-                mtime + 4,  # not modifed
+                {'000': 'fake'},  # read
+                {'000': 'fake', '001': 'fake'},  # modification
+                {'000': 'fake', '001': 'fake', '002': 'fake'},  # read
+                {'000': 'fake', '001': 'fake', '002': 'fake',
+                 '003': 'fake'},  # modified
+                {'000': 'fake', '001': 'fake', '002': 'fake',
+                 '003': 'fake', '004': 'fake'},  # read
+                {'000': 'fake', '001': 'fake', '002': 'fake',
+                 '003': 'fake', '004': 'fake'},  # not modified
             ])
 
 
+class TestHashesHelpers(unittest.TestCase):
+
+    def setUp(self):
+        self.testdir = tempfile.mkdtemp()
+
+    def tearDown(self):
+        rmtree(self.testdir, ignore_errors=1)
+
+    def test_read_legacy_hashes(self):
+        hashes = {'stub': 'fake'}
+        hashes_file = os.path.join(self.testdir, diskfile.HASH_FILE)
+        with open(hashes_file, 'w') as f:
+            pickle.dump(hashes, f)
+        expected = {
+            'stub': 'fake',
+            'updated': -1,
+            'valid': True,
+        }
+        self.assertEqual(expected, diskfile.read_hashes(self.testdir))
+
+    def test_write_hashes_valid_updated(self):
+        hashes = {'stub': 'fake', 'valid': True}
+        now = time()
+        with mock.patch('swift.obj.diskfile.time.time', return_value=now):
+            diskfile.write_hashes(self.testdir, hashes)
+        hashes_file = os.path.join(self.testdir, diskfile.HASH_FILE)
+        with open(hashes_file) as f:
+            data = pickle.load(f)
+        expected = {
+            'stub': 'fake',
+            'updated': now,
+            'valid': True,
+        }
+        self.assertEqual(expected, data)
+
+    def test_write_hashes_invalid_updated(self):
+        hashes = {'valid': False}
+        now = time()
+        with mock.patch('swift.obj.diskfile.time.time', return_value=now):
+            diskfile.write_hashes(self.testdir, hashes)
+        hashes_file = os.path.join(self.testdir, diskfile.HASH_FILE)
+        with open(hashes_file) as f:
+            data = pickle.load(f)
+        expected = {
+            'updated': now,
+            'valid': False,
+        }
+        self.assertEqual(expected, data)
+
+    def test_write_hashes_safe_default(self):
+        hashes = {}
+        now = time()
+        with mock.patch('swift.obj.diskfile.time.time', return_value=now):
+            diskfile.write_hashes(self.testdir, hashes)
+        hashes_file = os.path.join(self.testdir, diskfile.HASH_FILE)
+        with open(hashes_file) as f:
+            data = pickle.load(f)
+        expected = {
+            'updated': now,
+            'valid': False,
+        }
+        self.assertEqual(expected, data)
+
+    def test_read_write_valid_hashes_mutation_and_transative_equality(self):
+        hashes = {'stub': 'fake', 'valid': True}
+        diskfile.write_hashes(self.testdir, hashes)
+        # write_hashes mutates the passed-in hashes: it adds the updated key
+        self.assertIn('updated', hashes)
+        self.assertTrue(hashes['valid'])
+        result = diskfile.read_hashes(self.testdir)
+        # unpickling results in a new object
+        self.assertNotEqual(id(hashes), id(result))
+        # with exactly the same value mutation from write_hashes
+        self.assertEqual(hashes, result)
+
+    def test_read_write_invalid_hashes_mutation_and_transative_equality(self):
+        hashes = {'valid': False}
+        diskfile.write_hashes(self.testdir, hashes)
+        # write_hashes mutates the passed-in hashes: it adds the updated key
+        self.assertIn('updated', hashes)
+        self.assertFalse(hashes['valid'])
+        result = diskfile.read_hashes(self.testdir)
+        # unpickling results in a new object
+        self.assertNotEqual(id(hashes), id(result))
+        # with exactly the same value mutation from write_hashes
+        self.assertEqual(hashes, result)
+
+
 if __name__ == '__main__':
     unittest.main()
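
The TestHashesHelpers cases above pin down the new hashes.pkl layout:
read_hashes() reports a legacy pickle as valid with updated set to -1, while
write_hashes() stamps 'updated' with the current time and defaults 'valid' to
False, mutating the dict it is given. A minimal sketch consistent with those
expectations (error handling simplified; this is not the upstream code):

    import os
    import pickle
    import time

    HASH_FILE = 'hashes.pkl'

    def read_hashes(partition_dir):
        # legacy pickles lack 'valid'/'updated', so supply safe defaults
        hashes = {'valid': True, 'updated': -1}
        try:
            with open(os.path.join(partition_dir, HASH_FILE), 'rb') as fp:
                hashes.update(pickle.load(fp))
        except (IOError, EOFError, pickle.UnpicklingError):
            hashes['valid'] = False
        return hashes

    def write_hashes(partition_dir, hashes):
        # record when the hashes were written and whether they are valid;
        # note this mutates the caller's dict, as the tests expect
        hashes['updated'] = time.time()
        hashes.setdefault('valid', False)
        with open(os.path.join(partition_dir, HASH_FILE), 'wb') as fp:
            pickle.dump(hashes, fp)
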
diff -Nru swift-2.10.1/test/unit/obj/test_server.py swift-2.10.2/test/unit/obj/test_server.py
--- swift-2.10.1/test/unit/obj/test_server.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/obj/test_server.py	2017-05-26 21:40:54.000000000 +0200
@@ -36,13 +36,14 @@
 from collections import defaultdict
 from contextlib import contextmanager
 
-from eventlet import sleep, spawn, wsgi, listen, Timeout, tpool, greenthread
+from eventlet import sleep, spawn, wsgi, Timeout, tpool, greenthread
 from eventlet.green import httplib
 
 from nose import SkipTest
 
 from swift import __version__ as swift_version
 from swift.common.http import is_success
+from test import listen_zero
 from test.unit import FakeLogger, debug_logger, mocked_http_conn, \
     make_timestamp_iter, DEFAULT_TEST_EC_TYPE
 from test.unit import connect_tcp, readuntil2crlfs, patch_policies, \
@@ -4259,7 +4260,7 @@
         self.assertEqual(outbuf.getvalue()[:4], '405 ')
 
     def test_chunked_put(self):
-        listener = listen(('localhost', 0))
+        listener = listen_zero()
         port = listener.getsockname()[1]
         killer = spawn(wsgi.server, listener, self.object_controller,
                        NullLogger())
@@ -4288,7 +4289,7 @@
         killer.kill()
 
     def test_chunked_content_length_mismatch_zero(self):
-        listener = listen(('localhost', 0))
+        listener = listen_zero()
         port = listener.getsockname()[1]
         killer = spawn(wsgi.server, listener, self.object_controller,
                        NullLogger())
@@ -6699,7 +6700,7 @@
         self.logger = debug_logger('test-object-server')
         self.app = object_server.ObjectController(
             self.conf, logger=self.logger)
-        sock = listen(('127.0.0.1', 0))
+        sock = listen_zero()
         self.server = spawn(wsgi.server, sock, self.app, utils.NullLogger())
         self.port = sock.getsockname()[1]
 
@@ -6823,9 +6824,12 @@
             self.assertIn(' 499 ', line)
 
     def find_files(self):
+        ignore_files = {'.lock', 'hashes.invalid'}
         found_files = defaultdict(list)
         for root, dirs, files in os.walk(self.devices):
             for filename in files:
+                if filename in ignore_files:
+                    continue
                 _name, ext = os.path.splitext(filename)
                 file_path = os.path.join(root, filename)
                 found_files[ext].append(file_path)
@@ -7366,7 +7370,7 @@
         self.df_mgr = diskfile.DiskFileManager(
             conf, self.object_controller.logger)
 
-        listener = listen(('localhost', 0))
+        listener = listen_zero()
         port = listener.getsockname()[1]
         self.wsgi_greenlet = spawn(
             wsgi.server, listener, self.object_controller, NullLogger())
diff -Nru swift-2.10.1/test/unit/obj/test_ssync.py swift-2.10.2/test/unit/obj/test_ssync.py
--- swift-2.10.1/test/unit/obj/test_ssync.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/obj/test_ssync.py	2017-05-26 21:40:54.000000000 +0200
@@ -32,6 +32,7 @@
 from swift.obj.reconstructor import RebuildingECDiskFileStream, \
     ObjectReconstructor
 
+from test import listen_zero
 from test.unit import patch_policies, debug_logger, encode_frag_archive_bodies
 from test.unit.obj.common import BaseTest, FakeReplicator
 
@@ -66,7 +67,7 @@
         self.ts_iter = (Timestamp(t)
                         for t in itertools.count(int(time.time())))
         self.rx_ip = '127.0.0.1'
-        sock = eventlet.listen((self.rx_ip, 0))
+        sock = listen_zero()
         self.rx_server = eventlet.spawn(
             eventlet.wsgi.server, sock, self.rx_controller, self.rx_logger)
         self.rx_port = sock.getsockname()[1]
diff -Nru swift-2.10.1/test/unit/obj/test_ssync_receiver.py swift-2.10.2/test/unit/obj/test_ssync_receiver.py
--- swift-2.10.1/test/unit/obj/test_ssync_receiver.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/obj/test_ssync_receiver.py	2017-05-26 21:40:54.000000000 +0200
@@ -33,7 +33,7 @@
 from swift.obj import ssync_receiver, ssync_sender
 from swift.obj.reconstructor import ObjectReconstructor
 
-from test import unit
+from test import listen_zero, unit
 from test.unit import debug_logger, patch_policies, make_timestamp_iter
 
 
@@ -1933,7 +1933,6 @@
     # server socket.
 
     def setUp(self):
-        self.rx_ip = '127.0.0.1'
         # dirs
         self.tmpdir = tempfile.mkdtemp()
         self.tempdir = os.path.join(self.tmpdir, 'tmp_test_obj_server')
@@ -1948,7 +1947,8 @@
         }
         self.rx_logger = debug_logger('test-object-server')
         rx_server = server.ObjectController(self.conf, logger=self.rx_logger)
-        self.sock = eventlet.listen((self.rx_ip, 0))
+        self.rx_ip = '127.0.0.1'
+        self.sock = listen_zero()
         self.rx_server = eventlet.spawn(
             eventlet.wsgi.server, self.sock, rx_server, utils.NullLogger())
         self.rx_port = self.sock.getsockname()[1]
diff -Nru swift-2.10.1/test/unit/obj/test_updater.py swift-2.10.2/test/unit/obj/test_updater.py
--- swift-2.10.1/test/unit/obj/test_updater.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/obj/test_updater.py	2017-05-26 21:40:54.000000000 +0200
@@ -23,11 +23,13 @@
 from gzip import GzipFile
 from tempfile import mkdtemp
 from shutil import rmtree
+from test import listen_zero
 from test.unit import FakeLogger
+from test.unit import debug_logger, patch_policies, mocked_http_conn
 from time import time
 from distutils.dir_util import mkpath
 
-from eventlet import spawn, Timeout, listen
+from eventlet import spawn, Timeout
 from six.moves import range
 
 from swift.obj import updater as object_updater
@@ -38,7 +40,6 @@
 from swift.common.header_key_dict import HeaderKeyDict
 from swift.common.utils import hash_path, normalize_timestamp, mkdirs, \
     write_pickle
-from test.unit import debug_logger, patch_policies, mocked_http_conn
 from swift.common.storage_policy import StoragePolicy, POLICIES
 
 
@@ -304,7 +305,7 @@
                          {'failures': 1, 'unlinks': 1})
         self.assertIsNone(pickle.load(open(op_path)).get('successes'))
 
-        bindsock = listen(('127.0.0.1', 0))
+        bindsock = listen_zero()
 
         def accepter(sock, return_code):
             try:
@@ -362,7 +363,7 @@
         self.assertEqual([0],
                          pickle.load(open(op_path)).get('successes'))
 
-        event = spawn(accept, [404, 500])
+        event = spawn(accept, [404, 201])
         cu.logger._clear()
         cu.run_once()
         err = event.wait()
@@ -371,7 +372,7 @@
         self.assertTrue(os.path.exists(op_path))
         self.assertEqual(cu.logger.get_increment_counts(),
                          {'failures': 1})
-        self.assertEqual([0, 1],
+        self.assertEqual([0, 2],
                          pickle.load(open(op_path)).get('successes'))
 
         event = spawn(accept, [201])
diff -Nru swift-2.10.1/test/unit/proxy/test_server.py swift-2.10.2/test/unit/proxy/test_server.py
--- swift-2.10.1/test/unit/proxy/test_server.py	2016-12-12 18:58:21.000000000 +0100
+++ swift-2.10.2/test/unit/proxy/test_server.py	2017-05-26 21:40:54.000000000 +0200
@@ -42,21 +42,19 @@
 import uuid
 
 import mock
-from eventlet import sleep, spawn, wsgi, listen, Timeout, debug
+from eventlet import sleep, spawn, wsgi, Timeout, debug
 from eventlet.green import httplib
 from six import BytesIO
 from six import StringIO
 from six.moves import range
 from six.moves.urllib.parse import quote
 
-from swift.common.utils import hash_path, storage_directory, \
-    parse_content_type, parse_mime_headers, \
-    iter_multipart_mime_documents, public
-
+from test import listen_zero
 from test.unit import (
     connect_tcp, readuntil2crlfs, FakeLogger, fake_http_connect, FakeRing,
     FakeMemcache, debug_logger, patch_policies, write_fake_ring,
     mocked_http_conn, DEFAULT_TEST_EC_TYPE, make_timestamp_iter)
+from test.unit.helpers import setup_servers, teardown_servers
 from swift.proxy import server as proxy_server
 from swift.proxy.controllers.obj import ReplicatedObjectController
 from swift.obj import server as object_server
@@ -66,7 +64,9 @@
 from swift.common.exceptions import ChunkReadTimeout, DiskFileNotExist, \
     APIVersionError, ChunkWriteTimeout
 from swift.common import utils, constraints
-from swift.common.utils import mkdirs, NullLogger
+from swift.common.utils import hash_path, storage_directory, \
+    parse_content_type, parse_mime_headers, \
+    iter_multipart_mime_documents, public, mkdirs, NullLogger
 from swift.common.wsgi import monkey_patch_mimetools, loadapp
 from swift.proxy.controllers import base as proxy_base
 from swift.proxy.controllers.base import get_cache_key, cors_validation, \
@@ -80,8 +80,6 @@
 import swift.common.request_helpers
 from swift.common.request_helpers import get_sys_meta_prefix
 
-from test.unit.helpers import setup_servers, teardown_servers
-
 # mocks
 logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))
 
@@ -8558,7 +8556,7 @@
 
     def setUp(self):
         global _test_sockets
-        self.prolis = prolis = listen(('localhost', 0))
+        self.prolis = prolis = listen_zero()
         self._orig_prolis = _test_sockets[0]
         allowed_headers = ', '.join([
             'content-encoding',
