Re: How bandwidth requirement could be reduced when using thin clients?
On Wed, 06 Oct 2004, Ben Higginbottom wrote:
> As for the number of sessions running from a single server, a pure thin
> client setup needed one for every 20 or so clients. Nymph however could
> easily allow 50-60 sessions IIRC. The servers were dual Xeons with a gig
> of RAM in each, and the TCs were any machine with a BogoMIPS rating of
> less than a thousand.
Skolelinux currently suggests a dual Xeon with 4GB RAM can support 50-60
thin clients. I gather it's really more an issue of RAM than CPU cycles
at that stage.
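To make the RAM-bound claim concrete, here is a back-of-the-envelope sizing sketch. The per-session and base-system figures are illustrative assumptions of mine, not numbers from either post:

```python
# Rough RAM-bound sizing for a thin-client server.
# The per-session footprint (~60 MB) and base OS overhead (~512 MB)
# are assumptions for illustration, not measured values.

def max_sessions(total_ram_mb, base_os_mb, per_session_mb):
    """Rough upper bound on concurrent sessions, limited by RAM alone."""
    return (total_ram_mb - base_os_mb) // per_session_mb

# A dual Xeon with 4GB RAM, under the assumptions above:
sessions = max_sessions(total_ram_mb=4096, base_os_mb=512, per_session_mb=60)
print(sessions)  # 59 -- roughly in line with the 50-60 figure above
```

Under these assumed numbers the 4GB box lands right in the 50-60 session range, which is why adding RAM (rather than CPU) is the usual first upgrade.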
> >Is it helpful to have CF in a dual boot machine? Of course if you have any
> >OS locally installed that's a (at least partial) solution to network
> >downtime but full local installs are great fun to maintain as I'm sure you
> Full local installs of linux aren't difficult to maintain though. Once
> everyone is satisfied with our local installation setup, it'll be
> maintained and updated with a combination of an NFS apps server and
> apt4rpm (we're using SuSE 9.1) running as a cron job. Rsync from an
> ideal client was considered, but judged to be too cpu intensive.
Well, my definition of a full local install would be a machine which needed
no app server, thin client server or whatever. Meaning a machine that can
function normally (net access excepted) when you unplug its network cable.
I would consider the maintenance of that to be pretty considerable even if
you can use apt4rpm or rsync.
My own instinct still leans toward thin clients. As I see it (and it is
my opinion of course) if there is only one copy of the install and it sits
on the server, desktop machines can't have:
- filesystem issues
- disk problems
- any possibility for inconsistency between desktop software, config, etc
The server(s) can have these problems but then you fix them. It's really
one server to maintain and not 50 desktops.
In my experience, the most common hardware failure with any PC (aside from
the user) is the hard disk. If you are reliant on fifty of them, odds are
you'll have to replace a few at random (particularly on networks which let
hardware get old). Relying on only one or two good quality SCSI disks
improves your odds considerably. If those few disks are mirrored (RAID 1)
and backed up, you are reasonably prepared for such failures.
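The "odds are you'll replace a few" claim can be sketched with a quick calculation, treating disks as independent and assuming an annual failure rate; the 5% figure is purely an assumption for illustration:

```python
# How likely is at least one disk failure per year across a fleet,
# assuming independent disks? The 5% annual failure rate is an
# assumed figure for illustration only.

def p_any_failure(n_disks, annual_rate=0.05):
    """P(at least one of n independent disks fails within a year)."""
    return 1 - (1 - annual_rate) ** n_disks

def expected_failures(n_disks, annual_rate=0.05):
    """Expected number of disk failures per year."""
    return n_disks * annual_rate

print(p_any_failure(50))        # about 0.92 for 50 desktop disks
print(p_any_failure(2))         # about 0.10 for 2 server disks
print(expected_failures(50))    # 2.5 replacements expected per year
```

Under those assumptions a 50-desktop fleet is all but certain to lose at least one disk a year (and expects two or three), while a server with two mirrored disks rarely loses even one.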
Anyway, it's clear both configurations have their uses. Enough of me
wittering away on public lists :-)