
Re: "NFS Slave"??

> Yes, but to use rsync and scp, you have to either type all of the root 
> passwords, or set up keys to allow root to log in from one box to 
> another.

Not if you use remote-boot nodes: their root filesystems live on the
server, so you can do a purely local copy there.
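For example, with the nodes' root filesystems exported from the server, a
config file can be pushed to every node without any ssh session at all (the
`$EXPORT` layout and node names below are hypothetical; adjust to your setup):

```shell
# On the NFS server: push a config file into every node's root
# filesystem with a purely local copy -- no ssh, no root passwords.
# EXPORT is a hypothetical export layout (e.g. /export/nodes).
EXPORT="${EXPORT:-/tmp/export/nodes}"
for node in node01 node02 node03; do
    mkdir -p "$EXPORT/$node/etc"
    cp /etc/hosts "$EXPORT/$node/etc/hosts"
done
```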

> The first is extremely annoying, the second I'd rather not do 
> for security reasons: I don't want to make it easy for someone to root 
> my whole network by rooting one box.
> (Of course, with diskless, the whole cluster can easily be compromised 
> from the head node, but this is not the case for our network of 
> workstations.  If I'm using NIS for the latter, why not for the former too?)

There is no security problem for a cluster on a private net, especially if
it is a set of number-crunching nodes sitting on shelves inside a locked
room. Your single security risk is always the front-end.

But we use this (an ssh-transparent root among all machines) even for the
cluster of terminals which is on the Internet. First, we can trust our
users in this case (no undergrad students involved...|:-) and second, we
implement a mostly-closed access policy with the TCP wrapper: none of the
terminals accept any kind of interactive connection from outside our
Department's domain. Again, the security risk reduces to the servers.

For another cluster of terminals which is used by undergrad students, the
only difference is that we put the whole bunch on a private net. And we
don't have to open the ssh channels from the nodes to the server, only the
other way around; so even if someone breaks the BIOS setup password on a
node, boots from a floppy (which is disabled in the setup) and uses the
correct name and address, he still can't enter the front-end from within.
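That one-way trust amounts to installing the front-end's public key in each
node's authorized_keys file, and never the reverse (a sketch using the
standard OpenSSH file layout):

```
front-end:  /root/.ssh/id_rsa           (private key lives here only)
            /root/.ssh/id_rsa.pub
each node:  /root/.ssh/authorized_keys  (holds the front-end's id_rsa.pub)
            -- no key here that the front-end's sshd would accept
```

So root on the front-end can open sessions on any node, but a compromised
node holds nothing that lets it log back in.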

I think that these days, with debian-security, shadowing and the TCP
wrapper (plus MD5 passwords, once they become compatible with NIS), the
major security threat always relates much more to the discipline of your
local users than to invasions from the net.

> The only problem I can think of is IRQ conflicts.  I'm not a BIOS guru, 
> so I've had to shuffle around the PCI cards on that machine to prevent 
> conflicts; if I add another card it would be hard to prevent this 
> problem.  (Darned interrupt-starved PC hardware!  I have more than 20 
> IRQs to play with on my $500 five years ago powermac clone!)

You don't have to worry about IRQ conflicts for PCI cards: two or more PCI
cards can share the same IRQ, and Linux will identify each one by its slot
position on the bus. And please don't use any ISA cards!...
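You can see the sharing directly on a running Linux system; devices on the
same interrupt line appear on the same row of /proc/interrupts (the exact
output format varies with the kernel version):

```shell
# List IRQ assignments; two PCI devices sharing a line show up
# together on one row of this Linux-specific pseudo-file.
cat /proc/interrupts
```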

> And we'll probably add a third cluster in another year or two.  So if I 
> can avoid tons of NICs on the main server, I will.

If at all physically possible, you might consider running a few extra TP
cables and cascading the switches of your several clusters... Of course,
you will need to review the addressing scheme of each private net in order
to avoid conflicts...
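One simple scheme (example RFC 1918 addresses; pick your own) gives each
cluster its own /24, so the cascaded switches never carry a conflicting
address:

```
cluster 1 nodes:  192.168.1.1 - 192.168.1.254   (192.168.1.0/24)
cluster 2 nodes:  192.168.2.1 - 192.168.2.254   (192.168.2.0/24)
front-end:        one interface (or alias) in each subnet
```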

        Jorge L. deLyra,  Associate Professor of Physics
            The University of Sao Paulo,  IFUSP-DFMA
       For more information: finger delyra@latt.if.usp.br
