
Re: repartitioning software raid1 -- remotely?



-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi Will,
Doing stuff like this remotely is fun ;)

I would recommend using LVM to manage the size of your "partitions", so
you can easily assign space wherever you store your data.

I would also recommend upgrading the kernel to the latest 2.4 series before
you start playing with partitions; you can enable LVM support at the same time.


> so i (in indiana) am thinking i can
Install new kernel
> - split the raid (in boston) back into two hd* drives,
> - repartition the non-booted one,
into / of about 500M to 1G, swap of whatever and the remainder into a single
  partition
use _mdadm_ to create your raid arrays on the non-booted disk
  (i say mdadm because it doesn't need a config file, and imho it's easiest)
turn the large raid array into an LVM PV, create a VG and a few LVs
  (explained http://www.tldp.org/HOWTO/LVM-HOWTO/)
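A rough sketch of those two steps might look like this (device names and
sizes are hypothetical — adjust for your actual layout, and double-check the
target disk, since these commands are destructive):

```
# Create degraded RAID1 arrays on the non-booted disk; "missing"
# leaves a slot free for the other disk to join later.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdc1 missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdc3 missing

# Turn the large array into an LVM physical volume, make a volume
# group, and carve out a couple of logical volumes.
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 5G -n home vg0
lvcreate -L 2G -n var  vg0
mkfs.ext3 /dev/vg0/home
mkfs.ext3 /dev/vg0/var
```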
> - shuffle stuff over to the new partitions,
which are now lvm logical volumes
edit fstab! (for non-booted system)
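For example, the non-booted system's fstab might end up looking something
like this (the volume group name and mount points are just illustrative):

```
/dev/md0        /       ext3    defaults    0 1
/dev/hdc2       none    swap    sw          0 0
/dev/vg0/home   /home   ext3    defaults    0 2
/dev/vg0/var    /var    ext3    defaults    0 2
```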
> - reconfigure lilo,
grub would be better because it lets you (or your client) edit the boot
  parameters at the boot prompt
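If you do switch to grub, installing it onto the non-booted disk could look
something like this (grub legacy, which is what you'd have on a 2.4-era
system; paths and devices hypothetical):

```
# Mount the new root and put grub legacy in that disk's MBR.
mount /dev/md0 /mnt
grub-install --root-directory=/mnt /dev/hdc
# Then edit /mnt/boot/grub/menu.lst so the kernel line has
# root=/dev/md0 and points at an initrd with md and lvm support.
```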
> - boot from the newly-partitioned drive,
> - repartition the first drive to match the booted one,
sfdisk -l /dev/hdc | sfdisk /dev/hda
where hdc is the LVM+Raid disk and hda is the disk with ugly partitioning
> - re-establish raid parameters,
> - lilo some more,
or grub
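Once you've booted from the repartitioned disk, re-establishing the mirror
is just a matter of adding the old disk's fresh partitions back into the
degraded arrays (hypothetical device names again):

```
# md resyncs the new members in the background.
mdadm /dev/md0 --add /dev/hda1
mdadm /dev/md1 --add /dev/hda3

# Watch the resync progress.
cat /proc/mdstat
```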
> - and then reboot again.
>
> is that a sane/possible approach?
Perfectly. Just make sure your client has someone who is happy to receive a
  phone call from you talking them through how to fix stuff if things don't
  go to plan
> since we're NOT anywhere near the client machine, this seems to
> be a reasonable way of repartitioning the thing, remotely. if
> not, other pointers welcome.
>
> so how do we split the raid up without borking the remote
> computer into a non-bootable/non-reachable state?
if you have a raid1 array of /dev/hda1 and /dev/hdc1 you can mount both the 
member partitions as if they were not part of the raid array. 
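For example, to look at one half of the mirror without assembling the array
(read-only, to be safe — device names hypothetical):

```
# Stop the array if it is running, then mount one member directly;
# with the 0.90 superblock at the end of the partition, the
# filesystem starts at the usual place.
mdadm --stop /dev/md0
mount -o ro /dev/hdc1 /mnt
```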

> <dmesg snippet="in case it helps">
> VFS: Mounted root (cramfs filesystem).
> Freeing unused kernel memory: 128k freed
> md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
oo i see you already boot from an initrd with md support.. fun :)
>  [events: 00000014]
>  [events: 00000014]
> md: autorun ...
> md: considering hdb3 ...
> md:  adding hdb3 ...
> md:  adding hda3 ...
If it's possible, it would be a very good idea to get hdb moved to another
IDE bus. In the current configuration performance is going to be seriously
bad, because every write has to go down the same IDE bus twice, so your
write performance is half that of a single disk.
If the disk were moved, write performance would be that of a single disk,
and read performance should probably improve too, although that depends on
how paranoid the md raid 1 driver is about making sure the data it gives
the kernel isn't corrupted.

Hope this helps.

- -- 
David Leggett
david@asguard.org.uk
Get my public GPG key from http://www.asguard.org.uk/~david/gpg/david.asc
Fingerprint: 56E3 5457 49DA 60D8 A2D6 9199 EE8A F3B1 0ADA D289
 
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.5 (GNU/Linux)

iD8DBQFBJr0J7orzsQra0okRAtdyAJ9CGRZyOpKs9N41hn1UjpNTI/nQpQCgtkEr
D29jJ/YjAbsAheYYWgWHVrs=
=/+tK
-----END PGP SIGNATURE-----


