
Re: Beowulf cluster (was: parallel clusters of single cpu boxes)



hi ya..

my only worry about "automated swaps" for failed systems is that it all
depends on why the system failed in the first place... you can
automatically/blindly take a good spare, put it live, and blow it up too,
instead of first checking why the original died... just being paranoid...
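
(e.g. an automated swap script should at least poke the dead box before
promoting the spare... a rough sketch, and every hostname/IP in it is
made up:

  #!/bin/sh
  # pre-failover sanity check (sketch, untested)
  if ping -c 3 primary >/dev/null 2>&1; then
      echo "primary still answers ping -- NOT failing over, go look at it"
      exit 1
  fi
  # primary really looks dead... log it and have the spare take over the IP
  logger "primary unreachable, promoting spare"
  ssh spare ifconfig eth0:0 192.168.1.10 up

even then it only catches "box is dead", not "box is flaky"...)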

usually... the systems will die when you are not there anyway...
so they should have 24x7 coverage in either case... otherwise...
i claim it has to wait till the next shift comes in... or someone has
to be on call... which is expensive for most people... you cannot just
drop the "dinner date" to go check on what is usually a false alarm...

fun stuff... tricky stuff...
even worse if it's real-time, transaction-based, 24x7 with 99.9999% uptime
requirements etc...

c ya
alvin

and nope... don't have the time... have tons of machines for testing
though... well, enough to make a suitable test cluster...


On Thu, 22 Mar 2001, Kevin Long wrote:

> Hey, keep me posted (if you would) about how your setup works.  (Sounds like
> you may have the time to test soon.  Me, it will be the 1st of Apr at the
> soonest.)  If by some miracle I get to work on it, I will describe the entire
> process to let you know of any pitfalls.
> 
> I have studied this a bit, and am following the list (as well as the Linux
> Virtual Machine cluster list).
> I'd be glad to toss around ideas, as I am banking on this, along with
> heartbeat, to save me any downtime -- otherwise I'm off to the tried-and-true
> (albeit slow) cron+scp-and-pray method.
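> 
> (The cron+scp fallback is just something like this crontab entry -- a
> sketch; the paths and the hostname "standby" are made up:
> 
>   # push the mail spool to the standby box every 5 minutes (crude but simple)
>   */5 * * * * scp -rpq /var/spool/mail standby:/var/spool/
> 
> -- no failover logic, just a recent-ish copy sitting on the other machine.)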
> 
> The thing I am looking out for is what to do if one does go down.  How can I
> get the RAID-1 to sync back up correctly?
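> 
> (For what it's worth, my current understanding is that once the dead half
> comes back, re-adding it to the md mirror goes roughly like this -- a
> sketch with raidtools, and the device names are guesses:
> 
>   raidhotremove /dev/md0 /dev/nd0   # drop the stale half if still listed
>   raidhotadd /dev/md0 /dev/nd0      # re-add it; kernel resyncs in background
> 
> and "cat /proc/mdstat" shows the resync progress.)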
> ----- Original Message -----
> From: "Alvin Oga" <aoga@Mail.Linux-Consulting.com>
> To: "Kevin Long" <klong@protelco.net>
> Cc: <debian-user@lists.debian.org>
> Sent: Thursday, March 22, 2001 6:32 PM
> Subject: Re: Beowulf cluster (was: parallel clusters of single cpu boxes)
> 
> 
> >
> > hi kevin...
> >
> > wow... cool... hadn't seen or heard of nbd... but...
> > if it does the trick... hmm, maybe the time is here for
> > cheap, easily accessible clusters for high-volume web/email servers??
> >
> > -- only problem now is the time to spend building up the clusters
> >    and pulling the (ethernet or power) plugs on a few boxes to see if it
> >    keeps working...
> >
> > thanx
> > alvin
> > http://www.linux-1U.net ....
> >
> >
> > On Thu, 22 Mar 2001, Kevin Long wrote:
> >
> > >
> > > ----- Original Message -----
> > > From: "Alvin Oga" <aoga@Mail.Linux-Consulting.com>
> > > To: "Darryl Röthering" <drothering@hotmail.com>
> > > Cc: <debian-user@lists.debian.org>
> > > Sent: Wednesday, March 21, 2001 1:29 AM
> > > Subject: Re: Beowulf cluster (was: parallel clusters of single cpu boxes)
> > >
> > > > - am interested too in...
> > > > - if system21 fails... (simulate it by pulling the power plug)
> > > >   what happens...
> > > >
> > > > - how to keep "data synchronized" on the cluster
> > >
> > > I am looking into the same thing.  I want to keep -- dare I say it --
> > > mail spools ready for a hot swap over to my secondary server.  I haven't
> > > fully tested yet, but I was assured that by using ENBD (enhanced network
> > > block device) I could RAID-mirror across the network.
> > >
> > > I can't seem to find what I did with the mailing list address, but look
> > > for the Enhanced Network Block Device:
> > >  http://www.it.uc3m.es/~ptb/nbd/
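> > >
> > > (Roughly, the setup would be something like this -- a sketch using the
> > > plain nbd tools plus raidtools; the ports, devices, and hostnames here
> > > are made up, and I have not tested it:
> > >
> > >   # on the secondary box: export a spare partition over the network
> > >   nbd-server 5000 /dev/hdb1
> > >
> > >   # on the primary box: attach that export as a local block device
> > >   nbd-client secondary 5000 /dev/nd0
> > >
> > >   # mirror a local partition against it with md RAID-1
> > >   # (/etc/raidtab lists /dev/hda1 and /dev/nd0 as the two halves)
> > >   mkraid /dev/md0
> > >
> > > ENBD then adds failure detection and reconnection on top of plain nbd,
> > > per the page above.)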
> > >
> > >
> >
> 
> 


