
Re: Sort of OT: network logins



On Mon, Jan 27, 2003 at 08:44:55PM -0500, Neal Lippman wrote:
> I am looking for an approach to the problem of having multiple
> installations of debian on each computer on my lan that I use. While it
> is certainly reasonable to have a minimal install on each system,
> consisting of a basic debian system, it seems counterproductive to have
> to install each program that I use on each workstation, rather than
> having such software "served" by a central applications server.

Nitpicky point of terminology, but one which is actually relevant
here:  What you go on to describe is still a fileserver, just one
that happens to be used to store files which are executable.  In my
experience, "application server" refers to a remote machine on which
applications are run at the request of the local machine.

That said, have you considered using X as the basis of an actual
application server, with applications running on the app server and
displaying themselves on the local workstations?  This can be done at
two levels:  Using ssh with X tunnelling, in which case you are
normally using the local workstation, but can connect to the app
server to run specific programs; or using XDMCP, which essentially
turns the local workstation into a pure display terminal, running all
software on the app server.
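
For example (hostnames and usernames here are invented), the ssh
route from a workstation looks something like

    ssh -X youruser@appserver mozilla

which runs mozilla on the app server but displays it locally,
assuming X11Forwarding is enabled in the server's sshd_config.  For
the XDMCP route, you enable XDMCP in the display manager on the app
server (for plain xdm that means commenting out the
"DisplayManager.requestPort: 0" line in /etc/X11/xdm/xdm-config and
opening up Xaccess) and start the workstation's X server with

    X -query appserver

Treat that as a sketch rather than a recipe, though - the details
vary between xdm, kdm, and gdm.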

Having implemented something like this at work, with a mix of smart
terminals using ssh and dumb terminals using XDMCP (and even a few
machines running both simultaneously in separate X servers), I have
to say that the resource requirements for the app server are much
lower and the overall system performance is much higher than I had
expected.  And as far as administration goes, well... the software is
only installed in one place, so there are no synchronization or
update hassles.

> My present setup consists of a fileserver which exports various
> directories via nfs, including both a network-wide data store (called
> /share, for lack of a better idea),

LFS defines "share" as a (sub)directory for architecture-independent
shared data.  From what little you mention here, that seems
appropriate, although I would personally mount it under
/var/local/share rather than /share.  But that's purely a matter of
taste.
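
(If you do keep it as a shared NFS mount, the fstab entry on each
workstation would be something along the lines of

    server:/share   /var/local/share   nfs   rw,hard,intr   0   0

with the server name and mount options adjusted to suit.)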

> By way of example, the workstation mounts server:/home onto /nfs; my
> home directory on the workstation (/home/nl) is a symlink to /nfs/nl.
> This way, no matter which workstation I log into, I have my global
> /home/nl directory. Network-wide logins are handled by nis.

Why do you mess around with the symlinks instead of just mounting
server:/home under /home on the workstations?
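
Something like this in each workstation's /etc/fstab (names and
options are just placeholders) should do it, and the symlinks become
unnecessary:

    server:/home   /home   nfs   rw,hard,intr   0   0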

> Since /etc would be local to each workstation, the same install could
> conceivably be used by each system with it operating differently because
> of different config files (X comes to mind here, since hardware may
> differ).

Aye, and there's the rub...  If you install everything to an app
directory on a fileserver to be run on local workstations, you then
have to keep the relevant bits of /etc and /var up-to-date yourself.
This would likely give apt fits as well (apt on the workstations
wouldn't have any idea of what's actually installed) and may the gods
help you if you upgrade the server to a new version which isn't
backwards-compatible with the config files on (some or all of) the
workstations...

> A problem, however, is that (as far as I can
> tell) KDE does not understand multiple simultaneous logins, and
> therefore I risk file corruption (or worse?) if I log in twice to my
> account at the same time.

Are you sure about that?  I see odd directories (I can't recall the
name offhand) appear in the homes of KDE-using users which seem to be
related to the KDE object model and include the hostname of the
machine they're running from.  Oh, yeah - .dcop-hostname or something
close to it.  I don't do KDE myself, and none of my users run on two
machines simultaneously, so I can't say how well it works, but it
appears to be intended to address the situation you're describing.

> Theoretically, I would need to do this for any programs that cannot
> successfully sync shared storage (like evolution), however - so this
> isn't really a good overall solution.

You know, this reminds me of the locking problems I used to have with
mutt...  Are you sure your problem isn't at the NFS level rather than
the application level?  If you're running the user-space NFS daemon,
it doesn't support file locking.  (Or at least it didn't the last
time I checked.)  Build yourself a kernel on the file server with NFS
support and install the kernel-space NFS daemon instead, if you
haven't already done so.  That should take care of most of your
concurrency issues.
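
On Debian that's roughly (if memory serves on the package names)

    apt-get remove nfs-user-server
    apt-get install nfs-kernel-server

plus NFS server support compiled into the kernel.  Your /etc/exports
should carry over unchanged.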

> Any advice, pointers to references, etc, thoughts greatly appreciated.

In theory, it's certainly possible to install your binaries on a file
server and run them locally on each workstation.  I've considered it
myself and decided that it would most likely be even more trouble
than managing local installations of each application.  (Under
Debian, at least.  If you were building everything from source, then
it could well be worthwhile, but just running apt-get on each
workstation would be easier than trying to make sure that /etc and
/var are always in sync with /usr.)
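
For what it's worth, keeping a pile of workstations' package lists in
step with plain apt isn't much work either; the usual trick is
something like

    # on the reference machine
    dpkg --get-selections > selections.txt

    # on each workstation
    dpkg --set-selections < selections.txt
    apt-get dselect-upgrade

with selections.txt passed around however you like.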

If you want to centralize your application installations, consider
going to a full-blown app server, either with X apps tunnelled over
ssh or by setting up XDMCP terminals.

-- 
The freedoms that we enjoy presently are the most important victories of the
White Hats over the past several millennia, and it is vitally important that
we don't give them up now, only because we are frightened.
  - Eolake Stobblehouse (http://stobblehouse.com/text/battle.html)


