
Re: Major large-disk bug for cloud-utils / cloud-initramfs-growroot

On Wed, Oct 22, 2014 at 8:53 AM, Thomas Goirand <zigo@debian.org> wrote:
>
> Please don't top-quote.
>
> On 10/22/2014 01:49 PM, Jimmy Kaplowitz wrote:
> > In this case, the current behavior is actively destructive to the
> > partition table on disks over 2 TB - critical data loss bugs would be RC
> > bugs in scope for a stable update, and so are also probably in scope for
> > work at this point in the process.
>
> I don't agree with the above. In the cloud case, there's no data loss
> if you boot a VM with an image: in the worst case it simply doesn't
> work, but it will not destroy your HDD image.
>
> > While I agree that maxing out the
> > partition at 2 TB is far from ideal, it's much better than the status
> > quo of destroying the usability of the disk. We don't have a way to use
> > the disk size in deciding whether to install cloud-initramfs-growroot;
> > while our images are 10 GB (sparse), they are often used with much
> > larger disks.
>
> FYI, the upstream commit that adds the "do not destroy the HDD if it's
> > 2TB" behavior looks like this:
>
> diff --git a/bin/growpart b/bin/growpart
> index bbac2fe..49a1792 100755
> --- a/bin/growpart
> +++ b/bin/growpart
> @@ -160,6 +160,7 @@ mbr_resize() {
>         local dump_mod=${TEMP_D}/dump.mod
>         local tmp="${TEMP_D}/tmp.out"
>         local err="${TEMP_D}/err.out"
> +       local mbr_max_512="4294967296"
>
>         local _devc cyl _w1 heads _w2 sectors _w3 tot dpart
>         local pt_start pt_size pt_end max_end new_size change_info
> @@ -174,6 +175,9 @@ mbr_resize() {
>         tot=$((${cyl}*${heads}*${sectors}))
>
>         debug 1 "geometry is ${MBR_CHS}. total size=${tot}"
> +       [ "$tot" -gt "$mbr_max_512" ] &&
> +               debug 1 "WARN: disk is larger than 2TB. additional space will go unused."
> +
>         rqe sfd_dump sfdisk ${MBR_CHS} --unit=S --dump "${DISK}" \
>                 >"${dump_out}" ||
>                 fail "failed to dump sfdisk info for ${DISK}"
> @@ -217,6 +221,10 @@ mbr_resize() {
>                 [ -n "${max_end}" ] ||
>                 fail "failed to get max_end for partition ${PART}"
>
> +       if [ "$max_end" -gt "$mbr_max_512" ]; then
> +               max_end=$mbr_max_512;
> +       fi
> +
>         debug 1 "max_end=${max_end} tot=${tot} pt_end=${pt_end}" \
>                 "pt_start=${pt_start} pt_size=${pt_size}"
>         [ $((${pt_end})) -eq ${max_end} ] &&
>
> In the current version of cloud-utils, there's no such thing as
> "max_end", and backporting the patch really isn't trivial (please read
> the source code and try to make sense of it, then tell me what you
> think...).

Hmm... There is max_end in the current version. How about this:

--- cloud-utils-0.26/bin/growpart.orig    2014-10-22 09:23:07.455279618 +0200
+++ cloud-utils-0.26/bin/growpart    2014-10-22 09:26:46.627273142 +0200
@@ -119,6 +119,7 @@ tmp="${TEMP_D}/tmp.out"
 err="${TEMP_D}/err.out"
 orig_bin="${TEMP_D}/orig.save"
 RESTORE_HUMAN="${TEMP_D}/recovery"
+mbr_max_512="4294967296"
 
 # --show-pt-geometry outputs something like
 #     /dev/sda: 164352 cylinders, 4 heads, 32 sectors/track
@@ -130,6 +131,8 @@ sfdisk "${disk}" --show-pt-geometry > "$
 tot=$((${cyl}*${heads}*${sectors}))
 
 debug 1 "geometry is $CHS. total size=${tot}"
+[ "$tot" -gt "$mbr_max_512" ] &&
+    debug 1 "WARN: disk is larger than 2TB. additional space will go unused."
 sfdisk ${CHS} -uS -d "${disk}" > "${dump_out}" 2>"${err}" ||
     fail "failed to dump sfdisk info for ${disk}"
 
@@ -166,6 +169,10 @@ max_end=$(awk '
     [ -n "${max_end}" ] ||
     fail "failed to get max_end for partition ${part}"
 
+if [ "$max_end" -gt "$mbr_max_512" ]; then
+    max_end=$mbr_max_512
+fi
+
 debug 1 "max_end=${max_end} tot=${tot} pt_end=${pt_end} pt_start=${pt_start} pt_size=${pt_size}"
 [ $((${pt_end})) -eq ${max_end} ] &&
     nochange "partition ${part} is size ${pt_size}. it cannot be grown"
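For context, the magic number 4294967296 in both patches is the count of 512-byte sectors addressable by the 32-bit fields in an MBR partition entry: 2^32 * 512 bytes = 2 TiB. A standalone sketch of the clamp the patch performs (the function name below is illustrative, not from growpart):

```shell
#!/bin/sh
# An MBR partition entry stores its start LBA and sector count as 32-bit
# values, so with 512-byte sectors nothing past 2^32 sectors (2 TiB) is
# addressable. Anything beyond that must be capped, not written.
mbr_max_512=4294967296

clamp_to_mbr() {
    # Print the requested end sector, capped at the 32-bit sector limit.
    end="$1"
    if [ "$end" -gt "$mbr_max_512" ]; then
        end=$mbr_max_512
    fi
    echo "$end"
}

clamp_to_mbr 1048576       # 512 MiB disk: fits, passes through unchanged
clamp_to_mbr 6442450944    # 3 TiB worth of sectors: capped at 4294967296
```

This is exactly why the patch can only stop the corruption, not use the extra space: addressing sectors beyond 2 TiB requires GPT, which only newer upstream cloud-utils supports.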


> If you think you can do it, then please provide a patch against the
> current version of cloud-utils. If you can't, then please don't open an
> RC bug for it.
>
> > It's already a common customer desire on GCE to want disks over 2 TB,
> > such that we have it in our test suite (which caught this issue). During
> > the Jessie lifecycle, this need will grow across all environments.
>
> Well, once more, you will *not* be able to use a disk larger than 2TB
> by backporting the above patch. What we need is the latest upstream
> release, which supports GPT, and we need it uploaded tomorrow to reach
> Jessie before the freeze.
>
> > There's no way we can safely use cloud-initramfs-growroot with the
> > current cloud-utils in stock GCE images
>
> There's a way: do not use a disk bigger than 2TB.
>
> > and other clouds which offer
> > large physical disks should probably be cautious in the same way. If
> > adopting upstream's fix is deemed infeasible for Jessie and
> > wheezy-backports, at least put in a minimal equivalent fix of your own
> > design where it maxes out at 2 TB. Anything somewhat sane and
> > non-destructive is better than automated accidental destruction.
>
> I agree. Though again, that's not an easy fix. Let me know if you think
> you can do it yourself. Also, I would welcome it if you could test the
> latest upstream release ASAP and let us know if it works well enough.
> In that case, I may reconsider the upgrade.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=747229 should have taken care of upgrading to 0.27, but it fell through the cracks and I didn't follow up on it. Bummer.

...Juerg


> Cheers,
>
> Thomas Goirand (zigo)
>
> P.S: I'm registered to the list, please don't CC me.
>
>
> --
> To UNSUBSCRIBE, email to debian-cloud-request@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
> Archive: https://lists.debian.org/54475473.2050203@debian.org
>
