
Re: LVM partitions not mounting after upgrade



On 02/28/18 07:28, Andy Pont wrote:
Hello,

Today I have upgraded the third of our three Debian servers from Jessie (8.10) to Stretch (9.3). Whilst the first two went without a problem, the final one only boots to the maintenance mode prompt.

This particular server uses an Intel motherboard and has a 4-disk RAID array (mirrored and striped) created using the BIOS, on which exists (excuse the terminology if it isn’t quite correct) an LVM physical volume and three partitions (/opt, /home and /var).

When booting it sits for 90 seconds flashing messages of the form:

Start job running for dev-mapper-sdcserver\x2dvar.device
Start job running for dev-mapper-sdcserver\x2dopt.device
Start job running for dev-mapper-sdcserver\x2dhome.device

After the 90 seconds these turn into "Timed out waiting for..." messages and I am presented with the Control-D maintenance mode prompt.

Looking in /dev, there is the /dev/md126 device for the RAID array, but there are no /dev/dm-X entries and no /dev/vg_sdcserver directory as there is on a similar machine with the same setup.

When I try to investigate with commands such as pvcreate or vgchange in test mode, they all show messages about duplicates.

Could someone guide me on how to recreate the necessary files in /dev so that I can mount these volumes and boot the server?

Thanks,

-Andy.

https://lists.debian.org/debian-user/2018/03/msg00005.html


What is "sdcserver"?  Secondary Domain Controller?


What is the model of the Intel motherboard?


Is the RAID controller on the motherboard or a card? If card, what is the make and model?


What are the makes and models of the disks?


If you created the RAID10 using the BIOS, did Jessie see one physical disk or did you need to install additional software?
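
If Stretch now exposes both the assembled /dev/md126 device and the raw member disks to LVM, that alone would explain the "duplicates" messages. As a sketch only, assuming the array node really is the /dev/md126 you mention, this shows how the firmware RAID is being assembled from the maintenance shell:

  cat /proc/mdstat
  mdadm --detail /dev/md126

If the member disks are being scanned as well, a global_filter in /etc/lvm/lvm.conf that accepts only the md device is the usual way to stop LVM from looking at them.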


Is root on the RAID10 or on other disk(s)?
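
Also, purely as a sketch of the first things I would try from the maintenance shell once the above is answered (assuming the volume group really is called vg_sdcserver; substitute whatever "vgs" reports):

  # see which physical volumes and volume groups LVM can find
  pvs -a
  vgs
  # activate the logical volumes in the volume group
  vgchange -ay vg_sdcserver
  # recreate any missing /dev/mapper/* and /dev/vg_sdcserver/* nodes
  vgscan --mknodes
  # confirm the logical volumes are now visible
  lvs

Be careful with pvcreate: it initialises a new physical volume label and will overwrite what is already on the device. pvscan is the command that just scans for existing physical volumes.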


David

