Software RAID problem - disk names change when one disk fails
I'm testing a server before putting it into production, and I've run into a problem with this configuration:
- Dell PowerEdge 800
- 4 x 250 GB SATA disks attached to the motherboard
- /boot 4 x 1 GB (1 GB available) in RAID1, 3 active + 1 spare
- / 4 x 250 GB (500 GB available) in RAID5, 3 active + 1 spare
There were no problems during the install, and the server runs fine.
Then I stop the server and remove /dev/sdb to simulate a hard disk failure that
has caused a crash and a reboot.
With the second disk removed, the disk names shift: the 3rd disk /dev/sdc
becomes /dev/sdb, and the 4th disk (which was the spare) /dev/sdd becomes
/dev/sdc.
During the boot process md detects that there is a problem, but then complains
that it can't find the /dev/sdd spare disk, and the boot stops with a kernel
panic.
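In case it matters, I believe mdadm can identify arrays by their UUIDs instead of by member device names, which should make the arrays immune to this kind of renaming. This is a sketch of what I think the config would look like (the UUIDs are placeholders, not from my system; the real values would come from `mdadm --detail --scan`):

```
# /etc/mdadm.conf (or /etc/mdadm/mdadm.conf on Debian-based systems)

# Scan all partitions rather than a fixed device list,
# so members are still found after the names shift.
DEVICE partitions

# Identify each array by UUID -- placeholder values shown here.
ARRAY /dev/md0 UUID=<uuid-of-the-raid1-array>
ARRAY /dev/md1 UUID=<uuid-of-the-raid5-array>
```

Is something like this the right approach, or is the problem elsewhere (e.g. in the initrd or boot loader)?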