
Re: Buster: problem changing partition size on a RAID 5 array



On 14/08/17 01:58 PM, Pascal Hambourg wrote:
On 14/08/2017 at 06:32, Gary Dale wrote:

Disk /dev/md1: 39068861440 sectors, 18.2 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): EFF29D11-D982-4933-9B57-B836591DEF02
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 31255089118
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You created a GPT partition table on the md array. When the table is created, the first and last usable sector numbers (based on the device size at that time) are recorded in the GPT header and define the total available space. The reason is that the two copies of the partition table sit before the first usable sector and after the last usable sector. So changing the device size is not enough: you need to move the secondary partition table to the new end of the device and adjust the last usable sector number.
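For the archives, and if I understand the gdisk tools correctly, the repair after growing the array should amount to something like the following sketch (the device name /dev/md1 is just the array from the output above, and backing up the table first with "sgdisk --backup" would be prudent):

    sgdisk --move-second-header /dev/md1   # same as "sgdisk -e": move the backup
                                           # GPT structures to the new end of the
                                           # device and update the last usable
                                           # sector field in the header

After that, the partition itself can be enlarged with gdisk or parted, and the filesystem grown afterwards (e.g. resize2fs for ext4). Untested on this particular array, of course.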

That still sounds like a bug. If I did a dd from a smaller hard disk to a larger one and then used gdisk, I'd expect it to see the new drive size and handle it correctly. In fact, it did notice that the md array was larger but didn't update its tables.

Neither fdisk nor gdisk lets me create a partition larger than the existing one. Nor do they let me create a new partition anywhere except in the first 2047 sectors.

Because they rely on the GPT header to determine the available space.
With gdisk, you could have used the "v" command to verify the disk and adjust the partition table to the new size. I don't know whether fdisk can do this too.
Again, gdisk does appear to know that the device is larger than its tables indicate, but it doesn't update them. At the very least, I'd expect it to produce a message telling me about the issue and suggesting a resolution, the way gparted did.
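For the record, and from memory, the interactive gdisk sequence to fix this would be roughly:

    gdisk /dev/md1
      v   # verify the disk; it should report that the backup header is not
          # at the end of the device
      x   # switch to the experts menu
      e   # relocate backup data structures to the end of the disk
      w   # write the updated table and exit

Obviously check what "v" reports before writing anything; this is untested here.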


I'm not sure what the problem is that gparted was able to see but fdisk and gdisk couldn't, or whether this is a bug in mdadm or something else, but I thought I should report it somewhere.

In the first place, the bug was creating a partition table on an md array at all. Almost nobody does this, and I can see no value in it. If you want to use the whole array as a single volume, don't partition it. If you want to create multiple volumes, use LVM, as most people still do even now that md arrays can be partitioned. It is much more flexible than partitions.
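For what it's worth, the usual LVM-on-md setup is only a few commands once the array exists; the volume group and logical volume names and the sizes below are just examples:

    pvcreate /dev/md1              # make the whole array an LVM physical volume
    vgcreate vg0 /dev/md1          # create a volume group on it
    lvcreate -L 2T -n home vg0     # carve out a logical volume
    mkfs.ext4 /dev/vg0/home        # put a filesystem on it

Growing a volume later is then a single "lvextend -r -L +500G /dev/vg0/home", with no partition table or backup GPT header to worry about.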

The reason it's rare is more likely that Linux hasn't been able to boot from a partitioned md array until recently. I'm one of those people who see little value in LVM. It just adds complexity to solve problems that a little planning could usually avoid. Of course, I'm not running a large datacentre with the need to frequently reallocate disk space on the fly...

For me, booting from a partitioned RAID array makes more sense. It adds no extra programs that could be hacked or that add to the system overhead, while still letting me divide up the available disk space as if it were a single drive.

Creating multiple RAID arrays seems like the less desirable solution, since it would either require more drives or make resizing more complicated (depending on whether you create one array per group of drives or multiple arrays on each group).

Being old school, I also note that the RAID controllers from the 1990s did pretty much the same thing: you'd create the arrays on a bunch of disks through the controller utility, then use the OS to partition the arrays.

