
Bug#510544: Installer/partition guide tried to use 500GB as swap



Frans Pop <elendil@planet.nl> writes:

> On Thursday 08 January 2009, Ferenc Wagner wrote:
>
>> Looks like it's a problem with conversion to logical extents.
>> After running the recipe, perform_recipe_by_lvm settles on
>>
>> 3246000 0 3246000 linux-swap ...
>>
>> that is, on a 3GB swap, which is correct (300% of the 1GB RAM).
>> However, it issues
>>
>> lv_create noc2 swap_1 1406
>>
>> to create it, thus creating a 1406 extent big (5.6GB) LV.  This is to
>> use up all the remaining extents.  So the main LV got a little bit too
>> small...  Roundoff, maybe?  TBC.
>
> Could also be an overflow maybe.

Neither roundoff nor overflow: a mismatch of units of measure.  You
know, the kind that cost NASA a Mars orbiter in 1999. :) This is not
the original problem (which is that the maximum automatic LV size is
1 TB under the current calculation scheme), but I think it's worth
fixing regardless.  It may even be the reason the restrictive kB-based
scheme was adopted for LVM in the first place, if nobody knows better.

To the point: perform_recipe_by_lvm determines free space in binary
kilobytes [kiB] by:

free_size=$(vgs -o vg_free --units k --noheading --nosuffix $VG_name | sed -e 's/\..*//g')

so expand_scheme effectively works out the partition sizes in kiB as
well (even though it originally had them in MB and scaled them to kB
-- for some unknown reason -- before starting the whole process).
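
Just to connect the numbers: the quoted 3246000 is presumably the
3246 MB swap maximum with "000" appended by the loop that the patch
below removes, while free_size comes back from vgs in binary
kilobytes, so two different kinds of "kilobyte" get mixed from this
point on.  Schematically (swap_max_mb is just a made-up name for the
recipe value):

swap_max="${swap_max_mb}000"   # 3246 MB -> 3246000, nominally SI kB
free_size=$(vgs -o vg_free --units k --noheading --nosuffix $VG_name | sed -e 's/\..*//g')   # kiB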

Then lvm_extents_from_human in lvm-base.sh assumes SI kilobytes [kB]
when computing the number of extents to create each LV with, so every
LV comes out about 2.4% (a factor of 1000/1024) smaller than expected.
This wouldn't be noticeable in itself, as the difference is small
compared to the size of any single LV, but the accumulated difference
is comparable to the size of the swap LV, which is created last, takes
up all the remaining space, and thus ends up nearly twice as large as
expected.
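
To put rough numbers on it (my own back-of-the-envelope arithmetic,
assuming 4 MiB extents and an intended root LV of about 100 GiB,
which is the ballpark of the report quoted above):

pe_kib=4096                                         # 4 MiB extents assumed
root_kib=$((100 * 1024 * 1024))                     # intended root LV size, in kiB
extents_wanted=$((root_kib / pe_kib))               # 25600
extents_created=$((extents_wanted * 1000 / 1024))   # the same figure read as SI kB: 25000
echo $(( (extents_wanted - extents_created) * 4 ))  # 2400 MiB missing from root

Those ~2.4 GB are what the swap LV, created last from whatever is
left, inherits on top of its intended ~3.2 GB -- hence the ~5.6 GB
observed above.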

Sorry for the painstaking detail.  The fix is to determine the free
space in SI kilobytes [kB] as well:

free_size=$(vgs -o vg_free --units K --noheading --nosuffix $VG_name | sed -e 's/\..*//g')
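
For comparison, the two readings of the same free space can be checked
by hand (noc2 being the VG from this report; the lowercase unit is
binary, the uppercase one SI, so the second number comes out about
2.4% larger):

vgs -o vg_free --units k --noheading --nosuffix noc2
vgs -o vg_free --units K --noheading --nosuffix noc2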


Now.  If at the same time I scrap the whole kB scheme and stick to MB
as usual, the original problem disappears, too.  With the attached
patch I get the following result, which I consider perfect:

LVM VG noc2, LV root - 1.6 TB Linux device-mapper (linear)
>     #1             1.6 TB     f  ext3    /
LVM VG noc2, LV swap_1 - 3.2 GB Linux device-mapper (linear)
>     #1             3.2 GB     f  swap    swap
Virtual disk 1 (xvda) - 1.6 TB Xen Virtual Block Device
>     #1  primary  255.0 MB  B  f  ext2    /boot
>     #2  primary    1.6 TB     K  lvm

It also works well on a 100 GB disk.  On a 1 GB disk, the original
code gives this:

LVM VG noc2, LV root - 687.9 MB Linux device-mapper (linear)
>     #1           687.9 MB     f  ext3    /
LVM VG noc2, LV swap_1 - 125.8 MB Linux device-mapper (linear)
>     #1           125.8 MB     f  swap    swap
Virtual disk 1 (xvda) - 1.1 GB Xen Virtual Block Device
>     #1  primary  255.0 MB  B  f  ext2    /boot
>     #2  primary  814.3 MB     K  lvm

while the MB-based code results in:

LVM VG noc2, LV root - 704.6 MB Linux device-mapper (linear)
>     #1           704.6 MB     f  ext3    /
LVM VG noc2, LV swap_1 - 109.1 MB Linux device-mapper (linear)
>     #1           109.1 MB     f  swap    swap
Virtual disk 1 (xvda) - 1.1 GB Xen Virtual Block Device
>     #1  primary  255.0 MB  B  f  ext2    /boot
>     #2  primary  814.3 MB     K  lvm

I have no basis for deciding which is better, but there isn't much
difference; I can't call either one wrong.

The following patch contains a not strictly necessary change as well:
lvcreate can use up all the remaining space by itself, so there is no
need to work that out by hand from the VG info.  Just use 100%FREE.
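
Assuming lv_create simply hands that argument over to lvcreate -l
(which the extent counts passed to it elsewhere suggest), the last LV
then boils down to something like

lvcreate -l 100%FREE -n swap_1 noc2

with swap_1 and noc2 being the names from the example above.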

Now how to proceed?

Cheers,
Feri.

--- perform_recipe_by_lvm.orig	2008-11-24 19:33:02.423155954 +0100
+++ perform_recipe_by_lvm	2009-01-09 19:57:42.665679233 +0100
@@ -28,26 +28,7 @@
 
 db_progress STEP 1
 
-# expand_scheme can't cope with decimal and lvm overhead.
-# lvm overhead gets calculated properly only using kbytes.
-# Switch to that. The worst scenario is that the last partition will be one PE smaller,
-# but in the big numbers it's nothing (4MB).
-free_size=$(vgs -o vg_free --units k --noheading --nosuffix $VG_name | sed -e 's/\..*//g')
-
-newscheme=''
-foreach_partition '
-	newmin="${1}000"
-	newmed="${2}000"
-	if [ "$3" != "1000000000" ]; then
-		newmax="${3}000"
-	else
-		newmax="$3"
-	fi
-	shift; shift; shift
-	newscheme="$newscheme${NL}$newmin $newmed $newmax $*"
-'
-
-scheme="$newscheme"
+free_size=$(vgs -o vg_free --units M --noheading --nosuffix $VG_name | sed -e 's/\..*//g')
 
 db_progress STEP 1
 
@@ -86,10 +67,9 @@
 	fi
 
 	if [ "$last" = yes ]; then
-		vg_get_info "$VG_name"
-		lv_create $VG_name "$lvname" $FREEPE || autopartitioning_failed
+		lv_create $VG_name "$lvname" 100%FREE || autopartitioning_failed
 	else
-		extents=$(lvm_extents_from_human $VG_name "${1}K")
+		extents=$(lvm_extents_from_human $VG_name "${1}M")
 		lv_create $VG_name "$lvname" $extents || autopartitioning_failed
 	fi
 


