
Re: moving LVM logical volumes to new disks



Hi

On Wed, Nov 12, 2014 at 10:09:43PM +0100, lee wrote:
> Hi,
> 
> what's the best way to move existing logical volumes or a whole volume
> group to new disks?
> 
> The target disks cannot be installed at the same time as the source
> disks.  I will have to make some sort of copy over the network to
> another machine, remove the old disks, install the new disks and put the
> copy in place.

Having to do this over the network makes it slightly more
complicated... but not impossible.

> Using dd doesn't seem to be a good option because extent sizes in the
> old VG can be different from the extent sizes used in the new VG.
> 
> The LVs contain VMs.  The VMs can be shut down during the migration.
> It's not possible to make snapshots because the VG is full.

Ok.

> New disks will be 6x1TB RAID-5, old ones are 2x74GB RAID-1 on a
> ServeRaid 8k.  No more than 6 disks can be installed at the same time.

Assuming that:

* both machines can be online at the same time

* there is a good network connection between them. The fatter the pipe
  the better

* both run Debian. Obviously

* The VMs are happy to (eventually) migrate to the new hardware box

Then there is a sneaky way, which can help minimize the downtime: LVM
and network block devices (or iSCSI; either can work). Clunky,
slightly hacky, but worth considering.

The basic idea is:

* On the receiving machine, prepare the disks. Export the *whole*
  disks (or rather: the RAID device(s)) using nbd, xnbd or iSCSI.
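
  A minimal sketch with plain nbd (device name and port are examples;
  newer nbd-server versions want the export defined in
  /etc/nbd-server/config rather than on the command line, so check
  nbd-server(1) for your version):

     # receiving machine: export the new RAID device over the network
     nbd-server 10809 /dev/md0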

* On the sending machine: attach the disks over the network, using
  nbd-client, xnbd-client or iSCSI.
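
  For example, pairing with the nbd-server sketch above (hostname and
  device names are illustrative; some nbd-client versions want an
  export name via -N instead of a port):

     # sending machine: attach the exported device as /dev/nbd0
     modprobe nbd
     nbd-client newbox 10809 /dev/nbd0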

* On the sending machine: 'pvcreate' the disks, and 'vgextend' them
  into your volume group.  So you end up with a volume group that spans
  *both* machines. Some of the PVs will be accessed over the network,
  but LVM doesn't care. Obviously, the I/O characteristics of the
  "remote" disks will be a lot worse.

* Avoid running any LVM commands on the receiving machine just yet -
  if you did, it would see a partial volume group and probably
  complain like mad. It may even update the metadata on the PVs it
  *can* see to say that the "other" PVs are unavailable, which is
  tricky to fix.
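
  One precaution worth considering: on the receiving machine, tell LVM
  to ignore the exported device for now, e.g. with a filter in
  /etc/lvm/lvm.conf (device name as in the examples above; remember to
  undo this afterwards):

     # in the devices { } section of /etc/lvm/lvm.conf
     global_filter = [ "r|^/dev/md0$|" ]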

* On the sending machine, use 'pvmove' to move each LV to the new
  disks of your choice. This will send them over the network.  This
  doesn't *require* any downtime on the VMs, but be prepared for slow
  I/O on them, as they will now (increasingly) be accessing stuff over
  the network.
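
  pvmove can move one LV at a time with -n, which makes it easier to
  keep an eye on progress (LV and device names are examples):

     # move one LV's extents from the old PV to the network PV
     pvmove -n vm1-disk /dev/sda2 /dev/nbd0
     # or, without -n, move everything off the old PV in one go
     pvmove /dev/sda2 /dev/nbd0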

* Once all your LVs have been moved, shut down the VMs on the sending
  machine and quiesce everything. You want to 'deactivate' the LVs with:

     lvchange -an vgname/lvname

  This will (amongst other things) remove the entries in /dev for the
  LVs, and make them unavailable.
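
  If every LV in the group is coming across anyway, you can deactivate
  the whole group in one go instead:

     vgchange -an vgname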

* On the sending machine, use 'vgsplit' to split the volume group into
  two volume groups. The remote disks should be moved into a new
  volume group.
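
  A sketch, assuming the existing VG is "vg0", the new one is to be
  called "vg_new", and the remote disk is /dev/nbd0:

     # the LVs on /dev/nbd0 must be inactive for this to succeed
     vgsplit vg0 vg_new /dev/nbd0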

* On the sending machine: "sync;sync;sync". Just for paranoia's
  sake. Paranoia is good, and not a vice.

* On the receiving machine, run 'pvscan', 'vgscan' and similar: these
  should now see a complete VG.
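
  For example:

     pvscan
     vgscan
     vgs    # the new VG should show up here, with no missing PVs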

* Shut down the nbd/xnbd/iSCSI client on the sending machine. You
  don't want the two machines accessing the same disks. Therein lies
  madness.
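
  With nbd that is something like (device name as before):

     nbd-client -d /dev/nbd0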

* Activate the LVs on the receiving machine ("lvchange -ay"), and copy
  the VM definitions across (exactly how depends on your
  virtualisation).
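
  For example, with libvirt (VM and VG names are placeholders; if the
  VG name changed, the disk paths in the XML will need editing too):

     # receiving machine: activate everything in the new VG
     vgchange -ay vg_new

     # sending machine: export a VM definition...
     virsh dumpxml vm1 > vm1.xml
     # ...receiving machine: and import it
     virsh define vm1.xml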

* Start up the VMs. Pray that they have network etc. as before.

* Profit.

I'm sure that there are (hopefully minor) details here that I've
forgotten (backups?), but it should give you the general idea.

Bottom line: Accessing disks over the network is perfectly possible,
if you are willing to live with the added latency. Not a good idea for
database servers or other IO intensive VMs.

It may be a better alternative than extended downtime.  As an
administrator, you get to make that trade-off.

Hope this helps
-- 
Karl E. Jorgensen

