
Re: Major large-disk bug for cloud-utils / cloud-initramfs-growroot



Please don't top-quote.

On 10/22/2014 01:49 PM, Jimmy Kaplowitz wrote:
> In this case, the current behavior is actively destructive to the
> partition table on disks over 2 TB - critical data loss bugs would be RC
> bugs in scope for a stable update, and so are also probably in scope for
> work at this point in the process.

I don't agree with the above. In the cloud case there is no data loss:
if you boot a VM from an image, the worst case is that it simply
doesn't work, but it will not destroy your HDD image.

> While I agree that maxing out the
> partition at 2 TB is far from ideal, it's much better than the status
> quo of destroying the usability of the disk. We don't have a way to use
> the disk size in deciding whether to install cloud-initramfs-growroot;
> while our images are 10 GB (sparse), they are often used with much
> larger disks.

FYI, the upstream commit that adds the "do not destroy the HDD if it's
larger than 2 TB" safeguard looks like this:

diff --git a/bin/growpart b/bin/growpart
index bbac2fe..49a1792 100755
--- a/bin/growpart
+++ b/bin/growpart
@@ -160,6 +160,7 @@ mbr_resize() {
 	local dump_mod=${TEMP_D}/dump.mod
 	local tmp="${TEMP_D}/tmp.out"
 	local err="${TEMP_D}/err.out"
+	local mbr_max_512="4294967296"

 	local _devc cyl _w1 heads _w2 sectors _w3 tot dpart
 	local pt_start pt_size pt_end max_end new_size change_info
@@ -174,6 +175,9 @@ mbr_resize() {
 	tot=$((${cyl}*${heads}*${sectors}))

 	debug 1 "geometry is ${MBR_CHS}. total size=${tot}"
+	[ "$tot" -gt "$mbr_max_512" ] &&
+		debug 1 "WARN: disk is larger than 2TB. additional space will go unused."
+
 	rqe sfd_dump sfdisk ${MBR_CHS} --unit=S --dump "${DISK}" \
 		>"${dump_out}" ||
 		fail "failed to dump sfdisk info for ${DISK}"
@@ -217,6 +221,10 @@ mbr_resize() {
 		[ -n "${max_end}" ] ||
 		fail "failed to get max_end for partition ${PART}"

+	if [ "$max_end" -gt "$mbr_max_512" ]; then
+		max_end=$mbr_max_512;
+	fi
+
 	debug 1 "max_end=${max_end} tot=${tot} pt_end=${pt_end}" \
 		"pt_start=${pt_start} pt_size=${pt_size}"
 	[ $((${pt_end})) -eq ${max_end} ] &&

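For reference, the hard-coded 4294967296 above is simply the number of
512-byte sectors addressable through the MBR's 32-bit LBA fields, which
is where the 2 TB ceiling comes from (a quick sanity check, purely
illustrative):

    $ echo $(( 4294967296 * 512 / 1024 / 1024 / 1024 / 1024 ))
    2

i.e. 2 TiB worth of 512-byte sectors; anything beyond that cannot be
described in an MBR partition entry.
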
In the current version of cloud-utils there is no such thing as
"max_end", so backporting this patch really isn't trivial (please read
the source code and try to make sense of it, then tell me what you
think...).

If you think you can do it, then please provide a patch against the
current version of cloud-utils. If you can't, then please don't open an
RC bug for it.

> It's already a common customer desire on GCE to want disks over 2 TB,
> such that we have it in our test suite (which caught this issue). During
> the Jessie lifecycle, this need will grow across all environments.

Well, once more, you will *not* be able to use a disk larger than 2 TB
by backporting the above patch. What we need is the latest upstream
release, which supports GPT, and we need it uploaded tomorrow to reach
Jessie before the freeze.
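
In case it helps when sorting out which images are actually affected,
the partition table type of a disk is easy to check. One possible way
(assuming parted is available; the device name is only an example):

    $ sudo parted -s /dev/sda print | grep 'Partition Table'
    Partition Table: msdos

"msdos" means the MBR code path discussed above applies; "gpt" needs
the GPT support from the latest upstream release.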

> There's no way we can safely use cloud-initramfs-growroot with the
> current cloud-utils in stock GCE images

There's a way: do not use a disk bigger than 2TB.

> and other clouds which offer
> large physical disks should probably be cautious in the same way. If
> adopting upstream's fix is deemed infeasible for Jessie and
> wheezy-backports, at least put in a minimal equivalent fix of your own
> design where it maxes out at 2 TB. Anything somewhat sane and
> non-destructive is better than automated accidental destruction.

I agree. Though again, that's not an easy fix. Let me know if you
think you can do it yourself. Also, I would welcome it if you could
test the latest upstream release ASAP and let us know whether it works
well enough. In that case, I may reconsider the upgrade.
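
For what it's worth, here is a rough, untested sketch of how one could
exercise growpart against a disk over 2 TB without a real volume (run
as root; the paths, sizes and partition layout are only examples, and
sgdisk comes from the gdisk package):

    truncate -s 3T /tmp/big.img               # sparse 3 TB backing file
    sgdisk --new=1:2048:4196351 /tmp/big.img  # small first partition on GPT
    loop=$(losetup --show -f /tmp/big.img)    # expose the image as a block device
    growpart --dry-run "$loop" 1              # should offer to grow well past 2 TiB
    losetup -d "$loop"

If that behaves sanely on your > 2 TB GCE disks as well, that would be
a useful data point for deciding on the upload.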

Cheers,

Thomas Goirand (zigo)

P.S: I'm registered to the list, please don't CC me.

