
Re: OpenVPN fails



 Hi.

On Wed, Oct 07, 2015 at 02:56:49PM +0100, Tony van der Hoff wrote:
> On 06/10/15 19:00, Reco wrote:
> >> 1) Those should work just fine, and fix the trouble somewhat:
> >>
> >> mdadm --add /dev/md0 /dev/sda5
> >> mdadm --add /dev/md1 /dev/sda6
> >> mdadm --add /dev/md3 /dev/sda8
> >> mdadm --add /dev/md4 /dev/sda9
> >> mdadm --add /dev/md6 /dev/sda11
> >>
> >> 2) A big warning - wait for RAID rebuild to finish before rebooting.
> >> Really. I mean it. Monitor the rebuild progress via /proc/mdstat.
> >>
> >> 3) Reboot to check that you're using correct kernel version.
> >>
> Done that -- it took a while to rebuild md6. I took the opportunity to
> delve into some man pages relating to this. The learning curve is steep!
> 
> mdstat now shows those partitions as [UU].
> 
> the kernel is now shown as 3.2.68-1+deb7u4 -- whoopee!

Just as planned.


> The grub menu still shows the old kernel :(

Please elaborate. Grub has never been able to show the Debian-specific
kernel version (as in "uname -v"); it shows the ordinary one (as in
"uname -r").


> >> Which leaves us with /dev/md2 and /dev/md5.
> >> These use /dev/sdb7 and /dev/sdb10, respectively.
> >> Sadly, both have size 9999M, and their respective pairs (sda7 and sda10)
> >> have only 4999M, so you won't be able to add them to RAID1.
> >>
> Understood.
> 
> This is now becoming scary stuff. My /home, and some other bits are
> backed up, but rebuilding the system would be painful.

It really depends on what you have on those. My telepathic skills aren't
leet enough to match UUIDs (/etc/fstab) with /dev/mdX, sorry :(
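You can do the matching yourself, though. Here is a toy sketch of the idea;
the blkid output and the fstab line are invented samples, and on a real
system you would feed in the output of `blkid /dev/md*` and the actual
/etc/fstab instead.

```shell
# Hypothetical `blkid /dev/md*` output (UUIDs invented for illustration)
blkid_out='/dev/md0: UUID="1111-aaaa" TYPE="ext4"
/dev/md2: UUID="2222-bbbb" TYPE="ext4"'

# Hypothetical /etc/fstab line
fstab_line='UUID=2222-bbbb /var ext4 defaults 0 2'

# Split the fstab line, strip the UUID= prefix, and look it up in blkid's output
set -- $fstab_line
uuid=${1#UUID=}
mountpoint=$2
mddev=$(printf '%s\n' "$blkid_out" | grep -F "UUID=\"$uuid\"" | cut -d: -f1)
echo "$mountpoint lives on $mddev"
```

With the samples above this prints "/var lives on /dev/md2".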

Losing, say, /var in the event of a disk crash can be really painful.
Losing, say, /srv in the same circumstances usually leads to a quick CV
update and getting out of there ASAP.


> >>
> >> Hard way:
> >>
> >> 1) Destroy partition table on /dev/sda ("parted rm", for example).
> >>
> parted --rm /dev/sda10 ?
> parted --rm /dev/sda7 ?

Sorry. The hard way is meant to be hardcore, and deleting only the chosen
partitions is not hardcore enough ;)

I meant that you should delete all 11 partitions from /dev/sda.
On second thought, leaving /dev/sda1 (the extended one) should do no harm,
as it is equal in size to /dev/sdb1.
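As a sketch, that comes down to something like the loop below. It only
*prints* the parted commands (a dry run, so nothing is touched), the
partition numbers 2-11 are an assumption based on this thread, and logical
partitions are removed highest-first so the numbering doesn't shift under
you. Verify against `parted /dev/sda print` before running anything.

```shell
# Dry run: print the commands to wipe partitions 2-11 from /dev/sda,
# keeping sda1 (the extended partition). Nothing is executed.
DISK=/dev/sda
cmds=$(for n in 11 10 9 8 7 6 5 4 3 2; do
  echo "parted -s $DISK rm $n"
done)
echo "$cmds"
```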


> /dev/sdb7 is swap -- presumably it's not normal to have swap on raid?
> Should I include it?

It depends on your availability requirements, pardon my corporate speak.
Losing swap on a heavily loaded system can easily mean a kernel panic.
Mirroring the swap mitigates this.
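For the record, mirrored swap would look roughly like this. The md device
name /dev/md7 is invented, the member partitions are the swap pair from
this thread, and the commands are only printed (a dry run), not executed.

```shell
# Hedged sketch: swap on a RAID1 mirror. Device names are assumptions;
# the commands are printed rather than run.
swap_md=/dev/md7    # hypothetical new RAID1 device for swap
cmds="mdadm --create $swap_md --level=1 --raid-devices=2 /dev/sda7 /dev/sdb7
mkswap $swap_md
swapon $swap_md"
echo "$cmds"
```

After that, the swap entry in /etc/fstab should point at the md device
instead of the raw partition.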


> >> 2) Copy partition table from /dev/sdb to /dev/sda ("parted mkpart").
> >> Ensure that both drives really have the same partition tables.
> >>
> Is there any reason to use this remove/recreate method, rather than
> going for parted resize?

I don't trust "parted resizepart". A personal quirk of mine, if you
will. Way too many good filesystems were irrecoverably damaged back in
the day :)
I would not risk it if I were you. Mixing mdadm with partition resize
can produce really funny results.

"mdadm --fail" → "mdadm --remove" → remove partitions on /dev/sda →
create partitions on /dev/sda → "mdadm --add"

Is safe, proven to be working method. Personally proven, I might add.
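For one mirror member that cycle looks like the dry run below. /dev/md2 and
/dev/sda7 are just placeholder names taken from this thread, and the
commands are printed, not executed; the repartitioning step happens between
the --remove and the --add.

```shell
# Dry-run sketch of the fail -> remove -> (repartition) -> add cycle
# for a single RAID1 member. Nothing is executed.
MD=/dev/md2      # placeholder array
PART=/dev/sda7   # placeholder member partition
cmds="mdadm $MD --fail $PART
mdadm $MD --remove $PART
mdadm $MD --add $PART"
echo "$cmds"
```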


> >> 3) Add partitions from /dev/sda to respective RAIDs.
> >>
> As before, presumably?

See above.


> >> 4) Wait for RAID rebuild to finish. That's really important part.
> >>
> Understood.
> 
> >> 5) Re-install grub to /dev/sda just in case.
> >>
> How would I do that?
> apt-get install --reinstall grub?

No, that's overkill. This should be enough:

grub-install /dev/sda
grub-install /dev/sdb


> >> 6) Reboot.
> >>
> >> 7) Check /proc/mdstat to ensure that there are no degraded RAIDs this
> >> time. Check kernel version while you're at it.
> >>
> >> 8) ...
> >>
> >> 9) Profit.
> > 
> > PS. Almost forgot it. Applies to both cases.
> > 
> > Update /etc/mdadm/mdadm.conf after RAID rebuild.
> What should I change? Is there an "update" method in mdadm?

Run "mdadm --scan --detail". Observe that your arrays consist of UUIDs.
Check /etc/mdadm/mdadm.conf for those UUIDs.
Compare results.
Update /etc/mdadm/mdadm.conf if discrepancy is found.

The best part: if your /etc/mdadm/mdadm.conf is empty, you don't need to
do anything (you're using mdraid's auto-assemble feature, so there's no
need to tinker with anything).
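The comparison itself is just text matching. Here is a toy version; both
strings are invented samples standing in for the real mdadm.conf and for
the scan output on your system.

```shell
# Toy comparison: which ARRAY lines from the scan are missing from the
# conf? Both inputs are invented samples (UUIDs are made up).
conf='ARRAY /dev/md0 UUID=aaaa:bbbb:cccc:dddd'
scan='ARRAY /dev/md0 UUID=aaaa:bbbb:cccc:dddd
ARRAY /dev/md2 UUID=eeee:ffff:0000:1111'

missing=$(printf '%s\n' "$scan" | while read -r line; do
  printf '%s\n' "$conf" | grep -qF "$line" || printf '%s\n' "$line"
done)
echo "lines to add to mdadm.conf:"
echo "$missing"
```

With the samples above, only the /dev/md2 line is reported as missing.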


> > Update initrd after updating /etc/mdadm/mdadm.conf.
> > 
> I'm afraid that's beyond my expertise

update-initramfs -k all -u

It's all in the handbook, really. All it takes is to read it once.

https://debian-handbook.info/


> Thanks for sticking with this.

You're welcome. A man needs to have his fun after a day of boring work,
after all :)

Reco

