
Bug#868875: Status update



There appear to be a number of bug fixes in the latest upstream version: https://github.com/canonical/cloud-utils/blob/master/bin/growpart

If the preference is to fix only this >2TB disk bug and not the others, I've extracted the patch for the 2TB limit and attached it here.

Cheers,
Adam


On April 26, 2021 1:57:40 AM CDT, Thomas Goirand <zigo@debian.org> wrote:
Hi,

Thanks for the information.

On 4/26/21 2:29 AM, adam@hax0rbana.org wrote:
[...]
So I believe this is fixed upstream, and I'd love to help get the patched
version accepted into buster. If anyone can tell me how to help make
this happen, please let me know.

Well, easy, just send a patch, and someone (probably me) will add it to
the current package, then ping the Stable release team to ask if they
find it acceptable for Buster.

Cheers,

Thomas Goirand

--
Sent from my iPod. Please excuse my brevity.
--- growpart	2021-04-28 04:07:32.105999995 +0000
+++ growpart	2021-04-28 04:17:09.269811349 +0000
@@ -282,18 +282,22 @@
 		[ -n "${max_end}" ] ||
 		fail "failed to get max_end for partition ${PART}"
 
-	mbr_max_sectors=$((mbr_max_512*$((sector_size/512))))
-	if [ "$max_end" -gt "$mbr_max_sectors" ]; then
-		max_end=$mbr_max_sectors
-	fi
-
 	if [ "$format" = "gpt" ]; then
 		# sfdisk respects 'last-lba' in input, and complains about
 		# partitions that go past that.  without it, it does the right thing.
 		sed -i '/^last-lba:/d' "$dump_out" ||
 			fail "failed to remove last-lba from output"
 	fi
-
+	if [ "$format" = "dos" ]; then
+		mbr_max_sectors=$((mbr_max_512*$((sector_size/512))))
+		if [ "$max_end" -gt "$mbr_max_sectors" ]; then
+			max_end=$mbr_max_sectors
+		fi
+		[ $(($disk_size/512)) -gt $mbr_max_512 ] &&
+			debug 0 "WARNING: MBR/dos partitioned disk is larger than 2TB." \
+				"Additional space will go unused."
+	fi
+
 	local gpt_second_size="33"
 	if [ "${max_end}" -gt "$((${sector_num}-${gpt_second_size}))" ]; then
 		# if mbr allow subsequent conversion to gpt without shrinking the

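For reviewers unfamiliar with the limit being patched around: an MBR/dos partition table stores LBA start and size as 32-bit fields, so a partition can span at most 2^32 sectors. The `mbr_max_512*$((sector_size/512))` expression in the patch scales that cap to the disk's actual sector size. The sketch below (not part of the patch) walks through the arithmetic, assuming `mbr_max_512` is 2^32 as in the upstream script:

```shell
#!/bin/sh
# Sketch of the MBR size-cap arithmetic used in growpart.
# Assumption: mbr_max_512 is 2^32, the number of 512-byte sectors
# addressable by the 32-bit fields in an MBR partition entry.
mbr_max_512=$((1 << 32))

# For a 512-byte-sector disk, the cap is 2^32 sectors = 2 TiB.
sector_size=512
mbr_max_sectors=$((mbr_max_512 * (sector_size / 512)))
max_bytes=$((mbr_max_sectors * sector_size))
echo "512B sectors: cap = $mbr_max_sectors sectors, $max_bytes bytes"

# For a 4096-byte-sector (4Kn) disk, the same 2^32-sector limit
# covers eight times as many bytes, hence the sector_size/512 scaling.
sector_size=4096
mbr_max_sectors=$((mbr_max_512 * (sector_size / 512)))
max_bytes=$((mbr_max_sectors / (sector_size / 512) * sector_size))
echo "4096B sectors: cap = $mbr_max_sectors sectors"
```

The patched code then clamps `max_end` to this cap and, when the disk itself exceeds `mbr_max_512` 512-byte sectors, emits the "larger than 2TB" warning so the wasted space is at least visible.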