Re: RAID5 (mdadm) array hosed after grow operation (there are two of us)
On Tue, 21 Apr 2009, Alex Samad wrote:
> > Learned my lesson though - no real reason to have root on lvm - it's now
> > on 3-disk RAID 1.
>
> always thought this, KISS
Exactly. I have servers with 4-disk, sometimes 6-disk, RAID1 root partitions,
because of KISS: every disk in the raid set should be identically partitioned
(these boxes run raid1 on one partition for /, and raid10 on another for an
lvm PV).
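A layout like that might be set up along these lines (device names and disk
count here are assumptions for illustration, not taken from the post):

```
# sdX1 on every disk -> RAID1 for /; sdX2 -> RAID10 for an LVM PV
mdadm --create /dev/md0 --level=1  --raid-devices=4 /dev/sd[abcd]1
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2

pvcreate /dev/md1        # only the assembled array becomes a PV,
vgcreate vg0 /dev/md1    # never the raw component partitions
```

Keeping every disk partitioned the same way means any disk can replace any
other, which is the point of the KISS argument above.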
As always, you MUST forbid lvm from ever touching the md component devices,
even when the md array is offline, and that includes whatever lvm
configuration is baked into the initrds...
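One way to enforce that is a reject-by-default device filter in
/etc/lvm/lvm.conf; a minimal sketch, assuming your PVs live only on assembled
/dev/md* devices (adjust the accept pattern to your own layout):

```
devices {
    # accept assembled md arrays, reject everything else,
    # so lvm never scans the raw component partitions
    filter = [ "a|^/dev/md|", "r|.*|" ]
}
```

On Debian, remember to regenerate the initramfs afterwards (e.g.
update-initramfs -u) so the copy of lvm.conf inside the initrd matches the
one on the root filesystem.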
Given the need to regenerate initrds whenever mdadm, lvm, and other such
easy-to-forget pieces change, we abolished the use of initrds in the
datacenter altogether.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh