Re: Enterprise and Debian Pure Blends
Hi all:
On Wednesday 01 September 2010 20:31:06 Russ Allbery wrote:
> "Jesús M. Navarro" <jesus.navarro@undominio.net> writes:
> > Another server can be your "identity management" one, which could
> > service LDAP and Kerberos KDC for instance.
>
> I think running LDAP and a KDC on the same system is a very bad idea from
> a security perspective.
I know, I know. I even seem to remember that one of the original Athena papers
stated the KDC was meant to be secured by putting it in a locked room with two
big armed gorillas in front of the door.
But even then, it's a matter of balance: for lots of environments it's good
enough to put all identity/auth management services into a single box
(probably replicated, load-balanced, etc.) and then take some care of that box
(paraphrasing Bellovin: put all your eggs in one basket, and watch that
basket very carefully)... but I digress a bit.
> > Yes, and that's a problem hardly managed (today) at the distribution
> > level, and one that, say, Microsoft has focused on from the beginning.
> > Debian, for instance, is a very good system for "at-the-box" level
> > management and as such very "system administrator friendly", but it is
> > almost completely lacking at the "site" level (at install time: is this
> > going to be a "stand-alone" server, an identity management server, a
> > workstation integrated in a "domain"...? and for day-to-day
> > administration: grouping the boxes per service family, group-managing
> > debconf params and installed packages, cross-system integration,
> > cluster configuration...)
>
> I think this is all true, but I also do want to note that all that
> user-friendliness is much less helpful when you're talking about a large
> enterprise scale. I think it's an interesting problem to solve for
> smaller sites, but for example Stanford wouldn't use any of that even if
> it were at the level of quality that Microsoft provides.
Hmmm, I wouldn't be so sure. It certainly looks that way in hindsight, but
bear in mind that Stanford carries quite a lot over from "the old days", much
like MIT's Athena. Both were pioneers, so successful that they still drag
along legacies that can be traced back about thirty years.
With regard to open source, unix-like systems versus Microsoft, the former
have the great advantage that, properly engineered (the unix way), they can
offer standardized, easily integrable "bricks" (a brick being a whole
computer), while still retaining enough flexibility when you really need to
go the tailor-made route. Emphasis on "really": in my experience, far too
often system administrators fall into a version of the "not invented here"
syndrome and do things "the way they know", not because they couldn't be done
differently, but because our systems are flexible enough to allow for our own
little cargo cults (see above about whether to put a local KDC alongside the
LDAP master server, for example).
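To make the "site level" group-management idea a bit more concrete, here's a
minimal sketch using Debian's debconf preseeding. The postfix question names
are real debconf keys, but the values, hostname and file name are purely
illustrative; on a real site you'd capture answers from a reference box with
debconf-get-selections (from the debconf-utils package) instead of writing
them by hand:

```shell
#!/bin/sh
# Hypothetical site-wide preseed file: one line per debconf answer, in the
# format "package question type value". Values here are made up.
cat > site-defaults.preseed <<'EOF'
postfix postfix/main_mailer_type select Satellite system
postfix postfix/mailname string box01.example.org
EOF

# On each box in the same service family one would then run (Debian-only):
#   debconf-set-selections < site-defaults.preseed
#   apt-get install --yes postfix
# so every box in the family gets identical answers non-interactively.

# Cheap sanity check: every line must have at least the four fields.
awk 'NF < 4 { exit 1 }' site-defaults.preseed && echo "preseed format OK"
```

The same file can also be fed to the installer via preseeding, which is
roughly the closest thing Debian has today to an install-time "site role"
decision.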
> > I know that going down this path is not exactly a trivial task, to say
> > the least, and that each "site level" decision comes at the price of
> > reduced flexibility (I said it before, but Debian Edu offers quite a
> > nice example), but that's why I argue that "Best Practices" howtos and
> > documentation are the proper point to start with (they offer guidance
> > for those who care or need it, without reducing flexibility when you
> > know you need to leave the paved road).
>
> Right. I think documentation is more useful than automation in this area,
> since particularly at the larger size, enterprises won't all fit in the
> same box, or even a reasonable number of boxes.
Again, I wouldn't be so sure. That's the way it is *now*, true, but I'd say
that's mainly because of a lack of standards and of maturity, so almost
everybody, especially the big players, had to invent their own way.
Look at the way aviation, electricity, landline communications, etc. evolved
and you'll see the future of IT: we are just now moving past the era of the
pioneers, where almost everything had to be tailor-made for lack of anything
better, and entering the era of standards, best practices, abstractions and
modularity, just as today almost no company runs its own power plant, and
nobody builds a plane or a building from custom parts instead of
standardized ones.
Certainly this won't happen overnight (things at the lower level, like TCP/IP
or PC hardware, standardized earlier; Microsoft has its own ideas about "big
picture" integration; etc.), but it seems the time is ripe for the unix world
(and thus for Linux distributions; see efforts like those sponsored within
Red Hat's Fedora project) to focus on site-wide integration, from early
attempts like those described at infrastructures.org or deployments like
those at MIT, Stanford or Carnegie Mellon, to current orchestration and
devops efforts.
My point is: what can I (we) do within Debian to get on that train?
Cheers