
Re: GSLB global server loadbalancing - possible?

On Sun, Oct 17, 2010 at 9:28 PM, Enrico Weigelt <weigelt@metux.de> wrote:
* Mario Kleinsasser <mario.kleinsasser+debian@gmail.com> schrieb:

> At work we have an Apache load balancer (mod_jk, reverse proxy, for intranet)
> which is handling about 54 million requests from all over Europe. We also
> have some branches in North America and Southeast Asia. This is a huge
> corporate network and I would like to implement some kind of global server
> load balancing so that clients could connect (to the domain) even when the
> main data center isn't responding.

Load balancing and disaster failover are completely different issues.

Yes, I know. Currently we have, in one single central location, an Apache acting as load balancer for a "portal" application hosted on multiple (four) Tomcats. The farm uses about 160+ Tomcats overall, but that isn't important. What is important is that this self-programmed portal also implements the Citrix API (among others, RSA etc.). This portal should therefore reside in different countries, because the terminal server farms are also in different countries. So in the "worst" case, a country with its terminal servers becomes something like an island. Say there is a terminal server farm in Warsaw, Poland, and another in Cologne, Germany.
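For context, the single-site setup described above might look roughly like this in mod_jk terms (a hedged sketch; the worker names and hostnames are made up, not the real configuration):

```
# workers.properties -- illustrative mod_jk setup, not the actual one
worker.list=portal_lb

# Four Tomcat backends behind one Apache lb worker, reached via AJP
worker.tomcat1.type=ajp13
worker.tomcat1.host=tomcat1.intra.company.com
worker.tomcat1.port=8009

worker.tomcat2.type=ajp13
worker.tomcat2.host=tomcat2.intra.company.com
worker.tomcat2.port=8009

# ... tomcat3 / tomcat4 defined analogously ...

worker.portal_lb.type=lb
worker.portal_lb.balance_workers=tomcat1,tomcat2,tomcat3,tomcat4
```

The GSLB question is then how to replicate this Apache+Tomcat bundle per location and steer clients to the right one.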

Normally the users connect to all farms throughout Europe, but in a disaster case they should be able to connect to their "local" farm (priority), and they should always be able to use the same domain name to reach the portal that provides the applications. Don't worry about the portal itself; it is intelligent enough to know where it is running, or at least to spot what is working and what is not.

The main thing is: how do we provide the same domain name, like portal.company.com, in different locations with possibly different IP addresses? BIND zones would be an option, I think.
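As a minimal sketch of that BIND idea (all addresses and network ranges below are made up for illustration), split-horizon views could hand each region a local address for portal.company.com, with a catch-all view pointing at the central site:

```
// named.conf -- hypothetical split-horizon setup
acl "pl-clients" { 10.10.0.0/16; };
acl "de-clients" { 10.20.0.0/16; };

view "poland" {
    match-clients { "pl-clients"; };
    zone "company.com" {
        type master;
        file "company.com.pl";       // portal.company.com. IN A 10.10.1.10
    };
};

view "germany" {
    match-clients { "de-clients"; };
    zone "company.com" {
        type master;
        file "company.com.de";       // portal.company.com. IN A 10.20.1.10
    };
};

view "default" {
    match-clients { any; };
    zone "company.com" {
        type master;
        file "company.com.central";  // central fallback address
    };
};
```

Each view serves its own zone file, so the same name resolves differently depending on the client's source network; short TTLs on the portal record would keep failover times reasonable.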

If your corporate system is quite big, I'd probably advise BGP-based
failover (take care of properly resetting TCP sessions, etc.) to a full
mirror data center - that's e.g. what one of my customers (a major German
ISP and mass hoster) does. For geographical load balancing you could use
multi-site announcements (like e.g. Akamai does), but that needs proper
support from the whole systems architecture (multi-master synchronization
over high-latency links, etc.).

Yep, we are currently testing BGP (anycast, of course), but this isn't easy to manage in an MPLS environment spanning multiple countries and different providers :-)
This is a bit, let me say, "complicated", and it's a bad feeling because standards aren't always standards...

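To make the anycast idea concrete (the prefix, AS numbers, and neighbor address below are purely illustrative), each site could announce the same service prefix to its upstream via BGP, e.g. with BIRD:

```
# bird.conf -- hypothetical anycast announcement from one site
protocol static anycast_routes {
    # 192.0.2.0/24 is the shared "service" prefix; the portal VIP
    # (e.g. 192.0.2.10) is bound to a loopback on the local balancer.
    route 192.0.2.0/24 blackhole;
}

protocol bgp upstream {
    local as 64512;                  # private ASN, illustrative
    neighbor 198.51.100.1 as 64496;  # provider session, illustrative
    export where proto = "anycast_routes";
    import none;
}
```

The same announcement runs at every site, so routing delivers clients to the nearest one; withdrawing the route when the local portal fails its health check is what provides the failover - which is exactly the part that gets awkward across multiple MPLS providers.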
> So before we will purchase a commercial solution (F5, Netscaler, foo, ....)
> I would like to ask how you would build up such a configuration based upon
> open source?

It's not a matter of individual products, but of the systems architecture.
We'll need to know *much* more about it before we can give any advice.

Yes, you're right. Let us begin with a statement like: I would like to use provider-independent techniques like DNS, and to use the "same" combination of Apache (LB) and Tomcats as a bundle in different locations, with the assurance that connecting clients use the "nearest" or "cheapest" portal and, as a fallback, maybe one central portal.

> I guess it will be a combination of BIND (the ability of views to map the
> source IP to different DNS resolutions), Linux-HA, and maybe some self-
> written script(?) based logic to manipulate the components.

Might be good tools for that. But can't tell without more information.

If you want, I would make a short draft to make things clear.
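The "self-written script" logic might look roughly like this (a hedged Python sketch; the site names, addresses, and the nsupdate hand-off are assumptions, not the actual setup): probe each portal, then publish the highest-priority healthy address, falling back to the central one so the name always resolves:

```python
import socket

# Sites in priority order: (name, ip). Addresses are illustrative.
SITES = [
    ("warsaw",  "10.10.1.10"),
    ("cologne", "10.20.1.10"),
    ("central", "10.0.1.10"),   # central fallback portal
]

def select_address(health):
    """Return the IP of the first healthy site in priority order.

    `health` maps site name -> bool (result of a probe). Falls back
    to the last (central) entry if nothing reports healthy, so the
    domain always resolves to *something*.
    """
    for name, ip in SITES:
        if health.get(name):
            return ip
    return SITES[-1][1]

def probe(ip, timeout=3):
    """Very small health probe: can we open the portal's HTTP port?"""
    try:
        with socket.create_connection((ip, 80), timeout=timeout):
            return True
    except OSError:
        return False
```

A cron job (or a Linux-HA resource agent) would run the probes, call `select_address`, and push the result into the zone, e.g. via `nsupdate` against the BIND master.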

> Has anyone already implemented a similar solution? Any advice for
> known programs?

Yes, indeed. For example, mirroring the cluster nodes' storage via
DRBD (beware: synchronous write performance over the ocean is *really*
bad; the commercial proxy, essentially a buffered pipeline, helps a bit,
but it is still suboptimal) - we already have some concepts for a
transaction-based replicated block store (which also includes cheap
snapshots, etc.), but not yet the resources to actually implement it.

No need to sync data that way. The portal software is able to store all the data it needs in a local Derby DB, and the data used by the portal may be "older" - it does not have to be tightly synchronized. If the primary portal a user is connected to crashes, the user has to close the browser, fire up a new one, create a new session in the "next" location and log in again. But the domain has to be reachable in any case...

I hope this is understandable?

Thanks for your answer!


PS: greetings from Austria :-)



 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weigelt@metux.de
 mobile: +49 151 27565287  icq:   210169427         skype: nekrad666
 Embedded Linux / Porting / Open-Source Quality Management / Distributed Systems
