
Re: deprecating /usr as a standalone filesystem?

Philipp Kern <trash@philkern.de> writes:
> On 2009-05-06, Russ Allbery <rra@debian.org> wrote:

>> I think it's pretty unlikely that *most* Debian machines are done
>> that way.  There are a lot better tools for keeping large numbers of
>> systems in sync these days than simple cloning from golden images,
>> and a lot of drawbacks to the golden image approach.
> We do the same with ~12 clients.  One master image is declared
> stable by rsyncing it using hardlinks[0] on the server, and from
> there it is rsynced to the clients, which reboot automatically if
> there are pending updates.  After the rsyncing it does local
> profile-based "patching".  I wonder about the drawbacks of this,
> because it works really nicely for us.  (Of course there's the
> downtime problem, but that's no problem for us, as those are
> clients, not servers.)

If you start getting node variation, it turns into a headache.  If you
can be sure there will be no node variation, it works fairly well, but
we want one solution that works for *all* the types of servers we run,
whether clusters, one-offs, or smaller sets of load-balanced servers.

You can also get a slow accumulation of cruft in your golden image over
time, and if you don't keep good documentation, it's really easy to
discover that you no longer know exactly how to rebuild your golden
image if you need to (such as for a new OS release).

> But why bother to do a complete reinstall every time something
> changes if you could just sync the delta?  (And yes, I'm roughly
> aware that there are something like softupdates in FAI too, but
> still.)

We don't do it every time something changes; usually we use Puppet to
push incremental changes.  We rebuild systems whenever we repurpose them
or whenever we do a major OS upgrade.

I like rebuilding systems from first principles for exactly the same
reason that I like recompiling the whole Debian archive: it tests your
process.  Having a complete, repeatable process for building a system,
rather than a static image that you may or may not be able to
reproduce, makes it much easier to migrate to new releases of the OS
(because you can layer most of your policy on top of the new release),
change any part of the process, and so on.

I've done this pretty much every different way you can with a lot of
versions of UNIX: golden images, portions of the file system in network
file systems, specific change application scripts, everything with
native packages, mixes of native packaging and configuration management
systems, etc.  For a fairly heterogeneous mix of servers that may
include some clusters of identical systems, I think FAI plus a good
configuration management system like Puppet is the way to go.  It makes
me feel the most comfortable about the upgrade path, the testing of the
whole system, and the robustness of the environment.

Russ Allbery (rra@debian.org)               <http://www.eyrie.org/~eagle/>
