Re: partitioning tools for LVM
On Thu, Jan 11, 2007 at 02:44:59PM -0700, Wesley J. Landaker wrote:
> On Thursday 11 January 2007 13:35, firstname.lastname@example.org wrote:
> > Are there any partitioning tools that happily deal in LVM on RAID?
> > parted, gparted, fdisk, cfdisk seem not to, at least from what
> > documentation I've managed to find for them.
> The nature of your question suggests that you don't really understand how
> LVM works. Here is a quick primer and some links:
Actually, I do. The principles are clear. But I do have the perhaps
sloppy habit of talking about a logical volume as a partition, perhaps
because logical volumes seem to fulfil the same role in the system as
partitions -- they contain file systems.
> When using LVM, you first need Physical Volumes (PVs). These are real
> partitions or drives. Some examples would be /dev/hda2 or /dev/sdb. To use
> a device as a physical volume, you typically just run pvcreate on it; in
> general, you also set it to an "LVM" type partition using fdisk and
> friends, but this isn't strictly necessary. Anyway, it will destroy any
> existing data on each partition or device you use for a PV. Setting up PVs
> is the ONLY time you'll ever use a program like fdisk or parted. For the
> rest, you use LVM tools.
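In command form, the PV step described above looks roughly like this (run as root; the device names are illustrative, not from anyone's actual system):

```shell
# WARNING: pvcreate destroys any existing data on the target device.
# /dev/hda2 and /dev/sdb below are illustrative; substitute your own.

# Optionally mark the partition type as "Linux LVM" (0x8e) in fdisk, then
# initialise each device as a physical volume:
pvcreate /dev/hda2
pvcreate /dev/sdb

# Check what LVM now sees:
pvdisplay
```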
The installer's partitioning phase also sets up logical volumes.
Perhaps this is what suggested to me that partition management
tools should handle LVMs as well.
I wish the installer's partitioning tool were available after
installation. It did take care of all the details correctly.
When I had previously used the lvm2 and RAID tools by hand, I did not.
There is too much data there now for me to be very willing to risk
finger-fumbling the lvm commands at this point, although I suppose I
will if I have to.
> After you've created PVs, you will not ever use them directly. Instead, you
> group them together into a Volume Group (VG). A VG has a symbolic name you
> give to a group of PVs. You create one with vgcreate, e.g. if I wanted to
> create a VG called "vgmain" using two PVs I'd created previously, I might
> run "vgcreate vgmain /dev/hda2 /dev/sdb". You can also add and remove PVs
> on-the-fly later.
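The VG step, including the on-the-fly additions and removals mentioned above, might look like this (device and group names are illustrative):

```shell
# Group previously created PVs into a volume group named "vgmain":
vgcreate vgmain /dev/hda2 /dev/sdb

# Add another PV later, without downtime:
vgextend vgmain /dev/sdc1

# Remove a PV (after migrating its extents elsewhere with pvmove):
pvmove /dev/sdb
vgreduce vgmain /dev/sdb

# Summarise the group:
vgdisplay vgmain
```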
> *Finally*, you need to create Logical Volumes, which is the whole point of an
> LVM system (that's why it's *Logical Volume Management*). These are the
> actual volumes that you treat like you used to treat partitions, e.g. put
> file systems on them. You create LVs as part of a Volume Group that you've
> previously created, and give them symbolic names. You can add, remove, and
> resize them using lv* commands. For instance, if I wanted a 500M LV
> named "opt", I might run "lvcreate -n opt -L500M vgmain". Now I'd probably
> make a filesystem on it with "mkfs.ext3 /dev/vgmain/opt" and put a line in
> my fstab like "/dev/vgmain/opt /opt ext3 defaults 0 0". Later, I might add
> or remove more LVs, resize them, etc. I can do all of this online, although
> obviously the data on the LV itself (e.g. a filesystem) may need to be
> unmounted and/or resized first; most filesystems can at least GROW
> online, while mounted.
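Putting the LV steps above together, including a later online grow (note that growing ext3 while mounted depends on a sufficiently recent kernel; shrinking always requires unmounting first):

```shell
# Create a 500M logical volume named "opt" in the volume group "vgmain":
lvcreate -n opt -L 500M vgmain

# Make a filesystem and mount it (plus the fstab line quoted above):
mkfs.ext3 /dev/vgmain/opt
mount /dev/vgmain/opt /opt

# Grow it later; the LV first, then the filesystem inside it:
lvextend -L +200M /dev/vgmain/opt
resize2fs /dev/vgmain/opt
```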
> Anyway, see <http://tldp.org/HOWTO/LVM-HOWTO/> for a more in-depth
> discussion. You can pretty much ignore anything that talks about "LVM1"
> unless you're working with a legacy system. There are also other systems
> like EVMS, but LVM2 is pretty much the mainstream.
Thanks for the referral. This howto is more complete than the last time
I looked at it, which will probably make a difference.
My reason for looking for an interactive tool (like the one built into
the installer) is that I miserably failed to set up LVM on RAID
before I used the installer's partitioning tool. I say "miserably"
because I think I actually ended up with inconsistent data structures on
disk. *Somehow* I ended up with the RAID not being recognised, which
resulted in complaints about duplicate LVM volumes. Deleting everything
and starting over only made things worse, because apparently the LV data
structures were still being recognised on disk. Nothing seemed to work
until I performed a destructive bad-block check on the entire hard disk.
There were no bad blocks, but the entire disk was clean after that, and
there were no longer any problems to confuse the kernel or the
installer, which set everything up correctly.
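In hindsight, a less drastic way to clear the stale metadata than a destructive bad-block scan might have been something like the following (assuming the devices really are expendable; /dev/md0 and the member partitions are illustrative):

```shell
# Deactivate the volume group and force-remove the stale LVM label:
vgchange -an vgmain
pvremove -ff /dev/md0

# Stop the array and erase the md superblocks, so the kernel
# stops detecting the old RAID members:
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/hda2
mdadm --zero-superblock /dev/sdb1
```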
There was also confusion in the documentation between
(1) the commands you give the kernel to make it recognise existing RAID,
(2) the commands you use to create the RAID and LVMs on disk so that
they are there to be recognised, and
(3) the configuration and startup files that issue the commands in (1)
based on the volumes created by (2).
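To make the distinction concrete, here is how I now understand the three kinds of commands (a sketch only; the array, device, and group names are illustrative):

```shell
# (2) Creating the structures on disk, done once:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda2 /dev/sdb2
pvcreate /dev/md0
vgcreate vgmain /dev/md0

# (1) Telling the kernel to recognise existing structures:
mdadm --assemble --scan
vgscan
vgchange -ay

# (3) The files that drive (1) at boot:
#   /etc/mdadm/mdadm.conf  (ARRAY lines; "mdadm --detail --scan" prints them)
#   /etc/lvm/lvm.conf      (LVM device scanning and filter configuration)
```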
And lvm.conf seemed to be optional. I gather from messages during a
recent etch upgrade that it is no longer optional. It was never clear
just which commands updated lvm.conf and which did not.
> Wesley J. Landaker <email@example.com> <xmpp:firstname.lastname@example.org>
> OpenPGP FP: 4135 2A3B 4726 ACC5 9094 0097 F0A9 8A4C 4CD6 E3D2