
Re: best practice for lvm?



On Thu, Jun 04, 2009 at 08:23:05AM +1000, Alex Samad wrote:
> On Wed, Jun 03, 2009 at 01:46:27PM -0500, Boyd Stephen Smith Jr. wrote:
> > In <20090603174408.GA25275@m364d1.ece.northwestern.edu>, Zhengquan Zhang 
> > wrote:
> > >Can I say the best practice for lvm is to create a single partition for
> > >the hard drive and a single PV on it
> 
> [snip]
> 
> > You definitely want separate LVs for any partition that (non-system) users
> > can write to, to avoid running out of space on your / partition.  I usually go 
> > overboard and have separate partitions for:
> > /boot      # If / is on LVM; not LV
> 
> I would suggest never putting / or /boot on an lvm partition, and at most
> putting them on a raid1 set.  Why?  In case something goes wrong, raid1 is
> much easier to dissect than lvm (and especially lvm on raid).

Does that mean lvm on raid is easier to dissect than lvm alone?

This is my setup: /boot is on raid1 and not on lvm; / (root) and /home are lvm
on raid1.

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg-root   4.6G  1.9G  2.5G  44% /
tmpfs                1008M     0 1008M   0% /lib/init/rw
udev                   10M  104K  9.9M   2% /dev
tmpfs                1008M     0 1008M   0% /dev/shm
/dev/md0               92M   24M   63M  28% /boot
/dev/mapper/vg-home   910G  372G  492G  44% /home
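
For what it's worth, a layout like that can be built roughly as follows (just
a sketch; the disk names, md numbers, VG name and sizes below are examples,
not my actual ones):

  # two raid1 sets: a small one for /boot, a big one used as the LVM PV
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

  # LVM lives only on top of the big raid1
  pvcreate /dev/md1
  vgcreate vg /dev/md1
  lvcreate -L 5G -n root vg
  lvcreate -L 900G -n home vg

  mkfs.ext3 /dev/md0          # /boot, plain raid1, no LVM
  mkfs.ext3 /dev/vg/root
  mkfs.ext3 /dev/vg/home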


> 
> > /usr
> > /usr/local # For OS migrations.
> > /home
> > /opt
> > /srv
> > /var
> > /var/tmp   # RAID 0 or other "fast"
> > /var/cache # RAID 0 or other "fast"
> > /tmp       # Usually tmpfs; no LV
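
Carving a list of LVs like that out of a single VG is just repeated
lvcreate/mkfs calls, one per mount point; a rough sketch (the VG name vg0 and
the sizes are made up):

  lvcreate -L 10G  -n usr  vg0 && mkfs.ext3 /dev/vg0/usr
  lvcreate -L 5G   -n var  vg0 && mkfs.ext3 /dev/vg0/var
  lvcreate -L 200G -n home vg0 && mkfs.ext3 /dev/vg0/home

plus a matching line per LV in /etc/fstab.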
> > 
> > >and leave enough unassigned PEs for later enlargement of certain LVs?
> > 
> > It is much easier to expand a filesystem than to shrink it.  This is true 
> > even if you aren't using LVM.
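
Expanding later is indeed straightforward as long as the VG still has free
extents; a sketch, assuming an ext3 filesystem on an LV called home in a VG
called vg:

  vgs vg                          # check how much free space is left in the VG
  lvextend -L +50G /dev/vg/home   # grow the LV
  resize2fs /dev/vg/home          # grow ext3 into the new space (works online)

Shrinking is the reverse and much riskier: resize2fs has to shrink the
(unmounted) filesystem first, then lvreduce the LV, and getting the sizes or
the order wrong destroys the filesystem.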
> 
> 
> 
> -- 
> "I want to thank you for taking time out of your day to come and witness my hanging."
> 
> 	- George W. Bush
> 01/04/2002
> Austin, TX
> at the dedication of his portrait



-- 
Zhengquan

