
RE: managing multiple machines



On Mon, 19 Nov 2001, Kelley, Tim (CBS-New Orleans) wrote:

> I would say if you're gonna go ahead and share /usr you may as well go
> diskless.

I think you are probably right about this.

> OR: run a centrally managed group of apps over X remotely (this could
> get messy tho) this way they all run on the same machine

This is a computational cluster -- the clients are "thick" so that each
can independently run a CPU- and memory-intensive job. In fact, we are
considering using Mosix. Running jobs on the server would defeat the
purpose of the cluster.

> However what is the problem you're having with the machines having their
> own /usr?  Can't you just have a "standard" group of packages that each
> machine gets, then update every night from there?

This is fine until the standard changes; then I have to go into each
machine and adjust it. That is bad enough if everything is a Debian
package, but suppose I use perl -MCPAN to install a Perl module, or
compile something locally? (Actually this isn't so bad -- I already
share /usr/local.) And if I get a new machine, how do I know I have
reproduced the software on the others EXACTLY? It's a nightmare.
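For what it's worth, one way to get a new machine to match the standard
(a sketch, and only for plain Debian packages -- it won't cover CPAN
modules or local builds) is to replay the reference machine's package
selection list:

```shell
# On the reference machine: dump the exact package selection state.
dpkg --get-selections > selections.txt

# On the new machine (as root), load that state and let apt act on it:
#   dpkg --set-selections < selections.txt
#   apt-get dselect-upgrade
```

That at least makes the "standard" an explicit file you can copy around
and diff, instead of something you reconstruct by hand.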

> Mounting just /usr over nfs is going to have non trivial repercussions
> with dpkg I would think.  That is usually what /opt is for and probably
> why debian does not use it.

I don't understand this, but I certainly want to! Why would dpkg care or
even know if the directory it is writing to is shared out over NFS?
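My best guess so far (and I may be missing something): dpkg keeps its
database under /var/lib/dpkg, which stays local to each machine even if
/usr is shared, so one client installing a package would change the
shared /usr while the other clients' databases never hear about it. And
packages drop files outside /usr too, which would remain per-machine:

```shell
# dpkg's state lives outside /usr, so each client keeps its own copy:
ls /var/lib/dpkg/status

# Many packages also install files outside /usr (e.g. conffiles in /etc),
# and those stay local no matter what you share:
dpkg -L bash | grep '^/etc'
```

So the shared files and the local bookkeeping drift apart -- but I'd
still like to hear the details.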
