
Re: dgit and upstream git repos



On Tue, Oct 07, 2014 at 08:26:45AM -0700, Russ Allbery wrote:
> I understand why you feel this way, particularly given the tools that
> you're working on, but this is not something I'm going to change as
> upstream.  Git does not contain generated files, and the tarball release
> does, because those two things are for different audiences.  Including the
> generated files in Git generates a bunch of churn and irritating problems
> on branch merges for no real gain for developers.  Not including them
> makes it impossible for less sophisticated users to deploy my software
> from source on older systems or on systems that do not have Autoconf and
> friends installed for whatever reason.

The flip side is that you can get burned when people try to compile from
your git tree on a system significantly older or newer than what you
typically develop against, and autoconf and friends have in the meantime
introduced incompatible changes to the macros used in your configure.in.

I've gotten burned this way many, many times, which is why I include
the configure script as generated on *my* development system.  That
way, developers running on, say, a RHEL system, or developers running
on some bleeding edge Gentoo or sid or experimental branch, won't lose
so long as they don't need to modify configure.in and thus regenerate
the configure script.
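
(Concretely, the update step on my end looks roughly like the
following; a minimal sketch, where the exact commands and flags are
illustrative rather than the literal e2fsprogs workflow.)

    # Regenerate the build machinery on the maintainer's own system,
    # then commit the generated files so that a plain
    # "git clone && ./configure" works without autoconf installed:
    autoreconf -f -i
    git add configure config.guess config.sub
    git commit -m "Regenerate configure with the maintainer's autoconf"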

It does mean that sometimes people lose because they need to build on
some new system, and so they need a new version of config.guess or
config.sub; instead of simply dragging in new versions of those two
files, they try to regenerate *everything* and then run into the
incompatible autoconf macro change.  But if I forced people to run
autoreconf on every git checkout, they would end up losing all the
time anyway...  This way they only lose when they are trying to
develop on some new OS class, such as ppcle, or when they make a
configure.in change *and* the autoconf macros have become backwards
incompatible.
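
(Dragging in new versions of just those two files is a one-liner kind
of fix; the sketch below assumes a Debian-ish box with autotools-dev
installed, but fetching them from the upstream config.git repository
works just as well.)

    # Refresh only the system-detection scripts; leave the generated
    # configure script alone so no autoconf run is needed:
    cp /usr/share/misc/config.guess /usr/share/misc/config.sub .
    git add config.guess config.sub
    git commit -m "Update config.guess/config.sub for newer systems"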

(Or maybe the answer is that I should stop doing so much complicated,
system-specific stuff in my configure.in --- but given that I'm trying
to make sure e2fsprogs works on *BSD, MacOSX, and, previously, Solaris,
AIX, and many other legacy Unix systems, I need to do a lot of
complicated low-level OS-specific tests, and those are the ones which
have historically had a very bad track record of failing when the
autoconf/automake/libtool/gettext developers made changes that were
not backwards compatible.)
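
(By way of illustration only --- the following is a generic shell
fragment of the sort that ends up in a configure.in after
AC_CANONICAL_HOST has set $host_os, not anything lifted from
e2fsprogs:)

    # Link against whatever extra libraries each OS family needs;
    # the library choices here are purely illustrative:
    case "$host_os" in
    darwin*)
            LIBS="$LIBS -framework CoreFoundation"
            ;;
    freebsd*|netbsd*|openbsd*)
            LIBS="$LIBS -lutil"
            ;;
    solaris*)
            LIBS="$LIBS -lsocket -lnsl"
            ;;
    esac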

> I say this not to pick a fight, since it's totally okay with me that you
> feel differently, but to be clear that, regardless of preferences, the
> reality that we'll have to deal with is that upstreams are not going to
> follow this principle.  I know I'm not alone in putting my foot down on
> this point.

Indeed, there are no easy answers here.  I personally find that
resolving branch conflicts (which you can generally do just by
rerunning autoconf) is much less painful than dealing with breakages
caused by changes in the autoconf macros, especially since it's fairly
common for people to be trying to compile the latest version of
e2fsprogs on ancient enterprise distros.
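
(Resolving such a conflict is mechanical; roughly the following,
assuming any conflict in configure.in itself has already been resolved
and configure is the only generated file in trouble:)

    # Don't hand-merge the generated configure script; throw away the
    # conflicted copy and rebuild it from the merged configure.in:
    git checkout --ours configure    # or simply: rm configure
    autoconf                         # regenerates ./configure
    git add configure
    git commit                       # finish the merge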

But of course, your mileage may vary, depending on your package and
on where your users and your development community might be trying to
compile it.  (I have in my development community Red Hat engineers
who are forced to compile on RHEL as part of their job.  :-)

Cheers,

						- Ted

