
Shared /usr over NFS - how does this work? [WAS: Move all to /usr]

Hello Wouter and *,

Since August 2011 I have been running an experimental cloud with 20 IBM
eServer x345 and 40 IBM eServer x335 machines...  enough to play with.

For three weeks now I have had my two  400V/32A/3P  CEE  wall outlets
for my two server racks in my office.

On 2011-10-13 19:38:12, you wrote:
> "Provide a simple way of mounting almost the entire system read-only and
> share it between multiple hosts to save maintenance and space," is what
> that wiki page says, but I'm not convinced. In theory, you can already
> share /usr between multiple systems today; but nobody does it, because

I was thinking about this, but HOW does this work with the config files?

Is there a Debian HOWTO which describes this?

If the software changes and the old config is no longer compatible with
the new binaries, your system could become unstable...  and running  a
cluster or a cloud with a few dozen, some hundred or a thousand servers
like this could bring down your entire system.

Is there a HOWTO on how to solve this?
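(For what it's worth, the usual sketch I have seen - not from any official
HOWTO, and the server name "nfs-master" and export path below are  made
up by me - is to export /usr read-only from the fileserver while  every
node keeps / and /etc on local disk, e.g. in the client's /etc/fstab:

```
# /usr shared read-only from the central fileserver;
# / and /etc stay on the local disk of each node
nfs-master:/export/usr  /usr  nfs  ro,hard,intr  0  0
```

That only answers the /usr half; the per-host config in /etc  still  has
to be synchronized some other way.)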

However, all of my systems have

    /dev/sda1	 1000 MByte	Rescue
    /dev/sda2	 xxxx MByte	swap
    /dev/sda3	10000 MByte	/tmp

    /dev/sda5	 2000 MByte	/Production_1
    /dev/sda6	 3000 MByte	/Production_1_var_log

    /dev/sda7	 2000 MByte	/Production_2
    /dev/sda8	 3000 MByte	/Production_2_var_log

So, normally I run "Production 1" (mounted as /) and always have a second
system to boot up.  If something goes really weird, I boot "Rescue".

Now I can update the second (not running) production system without  any
risk to the running one.
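(As a sketch of how such an offline update could be scripted - the helper
name, the mount point /mnt and the apt-get calls in the comments are  my
own illustration, not something from my actual setup:

```shell
#!/bin/sh
# Given the currently running production root partition, print the
# partition of the *other* (inactive) production system.
# Partition names follow the layout above:
#   /dev/sda5 = /Production_1, /dev/sda7 = /Production_2
other_production() {
    case "$1" in
        /dev/sda5) echo /dev/sda7 ;;
        /dev/sda7) echo /dev/sda5 ;;
        *) echo "unknown production partition: $1" >&2; return 1 ;;
    esac
}

# The inactive system could then be updated offline, e.g.:
#   mount "$(other_production /dev/sda5)" /mnt
#   chroot /mnt apt-get update
#   chroot /mnt apt-get -y upgrade
#   umount /mnt
other_production /dev/sda5
```

If the upgraded system boots fine, you switch the bootloader default  to
it; if not, you still have the untouched one.)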

> - Keeping your software on a central fileserver introduces a single
>   point of failure that you don't have if you don't do the central
>   fileserver thing

Yeah!  If it goes down, you have no network anymore...

I prefer to keep this fileserver in a heavily secured environment  and
sync my production systems from there when needed.

> - Moving more off / and into /usr does not free you of the need to
>   synchronize stuff across your systems (you have less to synchronize if
>   you only need to do /etc, but that's actually the hardest part to
>   synchronize)


My "rsync" of /usr takes only 10-20 seconds per server (the sync server
has 10GE interfaces to the internal switch, and the servers hang off
this switch with their second Ethernet port).

> - Frankly, in today's world, the amount of storage you need for your
>   software often pales in comparison to the amount of storage you need
>   for your data. I've rarely had to maintain a network of more than just
>   a few systems that had more than 10G worth of software locally
>   installed. When was the last time you bought a 10G hard disk? If
>   you're still having / be on local disk, you're still going to need a
>   local hard disk. Let's say you can still find a 146G SAS disk
>   somewhere -- that leaves you with 136G of wasted space anyway.


In some of my systems I use 4 GByte CF cards with a  SATA/PATA  adapter
because I do not need more disk space (my master DNS server is  such  a
case).

> I think it's a bad idea.


> The volume of a pizza of thickness a and radius z can be described by
> the following formula:
> pi zz a


Thanks, Greetings and nice Day/Evening
    Michelle Konzack

##################### Debian GNU/Linux Consultant ######################
   Development of Intranet and Embedded Systems with Debian GNU/Linux
               Internet Service Provider, Cloud Computing

itsystems@tdnet                     Jabber  linux4michelle@jabber.ccc.de
Owner Michelle Konzack

Gewerbe Strasse 3                   Tel office: +49-176-86004575
77694 Kehl                          Tel mobil:  +49-177-9351947
Germany                             Tel mobil:  +33-6-61925193  (France)

USt-ID:  DE 278 049 239

Linux-User #280138 with the Linux Counter, http://counter.li.org/
