
Re: HELP! Re: How to fix I/O errors? (SOLVED)



On 02/12/2017 08:30 AM, Marc Auslander wrote:
I do not use LVM over RAID 1.  I think it can be made to work,
although IIRC booting from an LVM-over-RAID partition has caused issues.
My boot partitions are separate.  They are not under LVM.
LVM is useful when space requirements change over time and the
ability to add additional disks and grow logical partitions is needed.
In my case, that isn't an issue.  I have only a small number of
partitions - 3 because of history, but starting from scratch I'd only
have two - root (including boot) and /home.
I started using LVM when I had a much smaller disk (40GB). With the current 1TB disk, even with three accounts on the box, and expanding several partitions when moving to the new disk, I have still partitioned less than half the disk and that is less than 1/3 used. So, no, LVM is probably not an issue any more.

BTW, what is your third partition, and why would you not separate it now if starting from scratch?
I converted to mdadm RAID as follows, IIRC.

Install the second disk, and partition it the way I wanted.
Create a one-disk RAID 1 array in each of the new partitions.
Take down my system, boot a live system from CD, and use a reliable
copy program like rsync to copy each partition's contents to the
equivalent RAID partition.
Run grub to set the new disk as bootable.  This is by far the
trickiest part.
Boot the new system and verify it's happy.
Repartition the now-spare disk to match the new one if necessary.
You may need to zero the front of each partition with dd if=/dev/zero
to avoid mdadm error checks.
Add the partitions from that disk to the mdadm arrays and let mdadm
do its thing.
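The steps above might look roughly like the following with mdadm (not from the original mail; the device names /dev/sda for the old disk, /dev/sdb for the new one, and the ext4/rsync details are assumptions for illustration - everything here needs root and will destroy data on the target disk, so adjust to your own layout before trying it):

```shell
# Create a one-disk RAID 1 array per new partition; "missing" is a
# placeholder for the mirror half that will be added later:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 missing

# From the live system, make filesystems and copy the data across:
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/new
rsync -aAXH /mnt/old/ /mnt/new/

# Make the new disk bootable (the tricky part):
grub-install /dev/sdb
update-grub

# Later, after repartitioning the old disk, clear any stale metadata
# (this is the "zero the front of each partition" step) and add its
# partitions so mdadm can sync the mirror:
mdadm --zero-superblock /dev/sda1
mdadm --add /dev/md0 /dev/sda1
```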

On 02/12/2017 07:08 AM, Bob Weber wrote:

I use raid 1 also for the redundancy it provides. If I need a backup I just connect a disk, grow each array and add the disk to it (I have 3 arrays for /, /home and swap). It syncs up in a couple of hours (depending on the size of the array). If you have grub install itself on the added disk, you have a bootable copy of your system (mdadm will complain about a degraded array). I then remove the drive and place it in another outbuilding in case of fire. You can even use an external USB disk housing for the drive to keep from shutting down the system. The sync is MUCH slower that way ... just come back the next day and you will have your backup. You then grow each array back to the number of disks you had before and all is happy again. Note that this single disk backup will only work with raid 1.
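A sketch of that grow-for-backup cycle (again not from the mail itself; /dev/md0 and /dev/sdc1 are hypothetical names, and these commands need root):

```shell
# Attach the backup disk's partition as a spare, then grow the mirror
# to three members so the spare becomes active and a resync starts:
mdadm --add /dev/md0 /dev/sdc1
mdadm --grow /dev/md0 --raid-devices=3

# Watch the resync progress:
cat /proc/mdstat

# Make the backup disk bootable on its own:
grub-install /dev/sdc

# Once the sync finishes, detach the backup member and shrink the
# array back to two devices:
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
mdadm --grow /dev/md0 --raid-devices=2
```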

So, how do you do a complete restore from backup? Boot from just the single backup drive and add additional drives as Marc Auslander describes, above?


One other question. If using raid, how do you know when a disk is starting to have trouble, as mine did? Since the whole purpose of raid is to keep the system up and running, I wouldn't expect errors to pop up like I was getting. Do you have to keep an eye on log files? Which ones? Or is there some other way that mdadm provides notification of errors? I've got to admit, even though I have been using Debian for 18 or 19 years (since Bo), log files have never been my favorite thing. I generally only look at them when I have a problem and someone on this list tells me what to look for and where.
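One answer (my suggestion, not something stated in the thread): mdadm has its own monitoring. On Debian the mdadm package runs a monitor daemon (mdadm --monitor --scan) that mails the MAILADDR address configured in /etc/mdadm/mdadm.conf on events like DegradedArray or Fail, and for the disks themselves smartmontools' smartd can warn before the array ever degrades. You can also glance at /proc/mdstat: a healthy two-disk mirror shows [UU], a degraded one [U_] or [_U]. A minimal shell check, with sample mdstat output inlined so the snippet is self-contained:

```shell
# Sample /proc/mdstat line for a healthy two-disk RAID 1 (on a real
# system you would read /proc/mdstat instead of this variable):
mdstat='md0 : active raid1 sdb1[1] sda1[0]
      976630336 blocks super 1.2 [2/2] [UU]'

# [UU] means both mirror halves are up; [U_] or [_U] means degraded.
if printf '%s\n' "$mdstat" | grep -q '\[UU\]'; then
  echo "array healthy"
else
  echo "array degraded"
fi
```

On a live system, `mdadm --detail /dev/md0` gives the same health information in long form.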

Marc

