
Re: join us!



On Fri, Sep 15, 2000 at 05:05:11PM -0700, Seth Cohn wrote:
> 
> No, the best way would be to contact the _maintainer_ of the package in 
> question.  Ultimately, that person(s) is responsible for the software on 
> behalf of Debian.
> Debian has an excellent system to not only find out WHO maintains the code,
> but how to contact them.  Either by bugreport programs, website info, or 
> just dpkg -s <package>, you will get the info you need to contact the right
> person.
>
In an ideal world we would have the time to turn over every stone
before we ever said a thing.  I don't think any of us can claim
innocence here.
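
For what it's worth, the lookup really is quick once you know where to
look.  If I understand the tools correctly, something like this (with a
real package name in place of <package>; the maintainer shown here is
invented) prints the responsible person:

    $ dpkg -s <package> | grep '^Maintainer'
    Maintainer: Jane Doe <jane@example.org>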
 
> >Basically, you are forking development.  There is now a version to be
> >found in all the standard places where you get the tar-balls, and
> >another version to be found in Debian.  But they both have the same
> >version number.  This is misleading information.
> 
> No, they are NOT forking.  Consider: the only way to get the Debian version 
> is from a Debian site.  Debian packages are set up as orig.tar.gz and a 
> Debian Patch.
> This prevents the exact problem you are talking about.
>
This is a good start.  Really it is.  I still think you're duplicating
effort, but it is a very good thing that you're making very clear what
you've changed by shipping along diffs.
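
Just so we're picturing the same thing: as I understand the source
format, a Debian source package for some hypothetical upstream release
1.2 looks roughly like

    foo_1.2.orig.tar.gz    the pristine upstream tarball
    foo_1.2-3.diff.gz      everything Debian changed, as a single diff
    foo_1.2-3.dsc          a description file tying the two together

so anyone who cares can see exactly what was touched.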

Part of my job, however, is making Linux more accessible to newcomers.
To me, it's not that far-fetched to suppose that somewhere down the
road someone will look around, see Debian, see the old version
numbers, see the issues associated with those versions, and ask the
same question.

So I'd definitely encourage you to tell people up front that you
backport selected improvements, seeking to produce the most stable
packages possible.

The obvious question I'm failing to answer is where and how you should
tell people.  Perhaps you should include this in a statement of your
development philosophy.  Debian is, in part, a distribution for, and I
don't mean this negatively, idealists.

So, as idealists, in the positive sense of the word, you can make a
statement that you support free software; this is why you separate
free from non-free.  And you can state your method of development in
your quest for stability.  If it's well written and succinct, I can
hope that people will actually read it.
 
> >First, you are forking development.  You are applying code from future
> >modifications to old software.  This poses a significant risk of
> >introducing bugs which will not be reproducible anywhere except in a
> >Debian environment.  This cuts off the non-Debian part of the open
> >source community in cooperating to resolve problems.
> 
> No, this allows Debian to ship 'known stable' code with the security hole
> patched, and with a minimum of problems.  Given the choice between version
> 1.2stable with the hole patched and a 'secured' 2.0beta, the choice is clear.

Beta, yes.  But if I recall the examples correctly, you were three or
four versions out of date.  Other distributions ship out of date
versions too, but my impression is that most of them aren't quite as
far out of date.  My reading of this is that you're hanging on to old
code too long, spending too much time fixing bugs that have already
been fixed, and that this is, perhaps, part of the reason you are
taking so long to release a distribution.

My perception (please do correct me if I'm wrong) is that you have
more developers than any other distribution.  I also perceive that
you're shipping at least three times as many packages as anybody else.
I don't think these are trivial advantages; you've got a lot going for
you here, where no one else comes close.

But my perception of Slink was that it was so old that it was broken.  I
remember all too well a bunch of people in my office, installing
Debian, running to me because I had a floppy disk with a working
dhcpcd.  I think you desperately need to shorten the time between
releases and I think you have the resources to do it.

>  Keep in 
> mind, 99% of the time, the current up-to-date version is in unstable, and 
> can and will be used by those who want it (they can recompile it for stable 
> if needed).
>
Rephrase that to "most up-to-date" version, and we agree.
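
(For anyone following along at home: my understanding is that the
recompile-for-stable trick amounts to roughly

    $ apt-get source foo        # with a deb-src line pointing at unstable
    $ cd foo-2.0
    $ dpkg-buildpackage -rfakeroot -us -uc

where "foo" and "2.0" are, of course, made up for the example.)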
 
> As for non-reproducible anywhere but Debian, this is why Debian has a 
> bug-tracking system.  And a maintainer who will track and fix those 
> problems, since they are the patcher in most cases.
>
Ah, the duplication of effort I already spoke of...
 
> >Second, you are duplicating effort.  Even if your backports of bug
> >fixes can be cleanly applied to the old code, you still must test
> >them.  In some cases, it will not be possible to apply these backports
> >cleanly.  This will require development which has already been done in
> >the main fork.
> 
> See above.  Development of new things might be worse than a bug-fix.
> What if 2.0 breaks things that 1.2 had working?  (i.e. XFree86 4.0 vs 3.3.6)
>
XFree86 4.0 was admittedly a problem.  The XFree86 team said so when
they released it and specifically advised people seeking stability to
stick with 3.3.6.  They didn't bury this information either.  It was
right there on their home page.  I see this as an exception that
proves the rule.

> Someone took responsibility to patch things.  It's not just 'hey, here's a
> patch'.
> It's "The maintainer felt that this patch was critical to the software, 
> even to the point of a backport."
>
This is a statement that sounds good.  But it overlooks the
difficulty involved when patches cannot be applied cleanly, or worse,
when you find out the hard way that some patches couldn't be applied
cleanly.  I think you're duplicating a tremendous amount of effort.
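
Anyone who has tried it by hand knows the moment I mean: patch comes
back with something like

    patching file foo.c
    Hunk #2 FAILED at 137.
    1 out of 3 hunks FAILED -- saving rejects to file foo.c.rej

(file name and numbers invented), and from there on you are redoing,
against older code, work the upstream author has already done.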
 
> > > Backporting specific fixes to earlier releases is not only not "a
> > > horrible way to do things", but is absolutely de rigueur in the
> > > industry.
> >
> >You overstate this.  Some very valuable improvements are indeed often
> >backported.  Far more often, the answer to software problems is,
> >instead, get the latest version.
> 
> USB 2.2 backport versus 2.4 USB in kernel
>
You've chosen an interesting example.  I heard there were a lot of
problems with it in the 2.2 kernels, largely because the backport was
taken before the code was really ready in its intended version.

I am remembering David Hinds, of PCMCIA fame, who told me that he
very often advises people who run into trouble to get the latest
version of the pcmcia-cs package.  He didn't explain why, but I can
see what he's talking about.  You see the same thing on the Linux
kernel list.

Sometimes bugs go away, seemingly, on their own.  It can happen when
somebody looks at some ugly code and decides to clean it up.  He
figures out what this ugly mess is supposed to do and writes clean code to
do it.  As long as he understands correctly what the mess is supposed
to do, there is a very good chance that the clean code will be more
efficient and more robust.

In these cases, older versions are not more stable; they contain more
ugly hacks.
 
> > >  You can't afford to put the entire set of potentially very
> > > destabilizing changes into a current or almost-current product!
> >
> >How can you be so confident that your backporting/forked development
> >model introduces significantly fewer destabilizing changes?  Have you
> >any statistics to validate this?
> 
> Yes, it's called the Debian bugreport and Debian's history of security.
>
Securityportal.com published a comparison of distributions at
http://www.securityportal.com/cover/coverstory20000724.html  I see
this is also by Kurt Seifried.  He didn't really include Debian in the
comparison; even if he had, you would presumably accuse him of bias.
I'm left searching for an impartial comparison.

> >And I believe you are, somewhat, begging the question.  How many
> >people running the 2.0 kernel chose it for new installations?  Are
> >they simply running the 2.0 kernel because they choose not to fix what
> >isn't broken or, given a new system to set up, are they choosing a 2.0
> >kernel for its vaunted stability?
> 
> Yes.  And its size, and its known issues (or lack thereof).
> 
We'll have to agree to disagree here, unless you have a poll of 2.0
users to back this up.  I don't know anyone who chose to install a
2.0 kernel, unless they were installing Slink; and even then they
installed Slink-and-a-Half as soon as they could find it.  The
footprint would certainly be a legitimate issue, if we find a lot of
people choosing 2.0 for that reason.

But my sample is skewed.  I live and work in San Francisco.  Most of
the people I know (and all of the ones who would install Debian) can
easily afford systems where the size of the kernel is not a particular
concern.  I can see where your mileage might vary.
> 
> Hope this clears up your questions.
> 
I think we can say this narrows our disagreements.  But then I'm known
for my stubbornness!

-- 
David Benfell
benfell@greybeard95a.com
ICQ 59438240 [e-mail first for access]
---
There are no physicists in the hottest parts of hell, because the
existence of a "hottest part" implies a temperature difference, and
any marginally competent physicist would immediately use this to
run a heat engine and make some other part of hell comfortably cool.
This is obviously impossible.
                                -- Richard Davisson
 
					[from fortune]

		 
