Re: sf.net redirector reports 500 Internal Server Error
On Sat, May 16, 2009 at 09:52:02PM -0500, Raphael Geissert wrote:
> Replying to put an end to this.
> Bart Martens wrote:
> > On Tue, 2009-05-12 at 20:33 -0500, Raphael Geissert wrote:
> >> So you want merkel to download three html pages every time the redirector
> >> is called?
> > Yes, three or more.
> Then it would be better to make the maintainer specify the final URL in the
> watch file; that would avoid the maintenance burden on the qa.d.o side.
Then the maintainer does not use the redirector. I remember that you were
against that in a previous e-mail. Can you elaborate on this?
Also, in my opinion, the sf.net redirector is meant to take that burden away
from the Debian package maintainers. It is better that debian/watch files
containing http://sf.net/project/file-(\d.*)\.tar\.gz simply keep working via
the redirector. When sf.net changes their website, only the redirector needs an
update.
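For context, such a watch file would look something like the following minimal
sketch (project and file are placeholder names; the URL pattern is the one
quoted above):

```
# debian/watch -- hypothetical example using the sf.net redirector;
# "project" and "file" are placeholders, not a real package
version=3
http://sf.net/project/file-(\d.*)\.tar\.gz
```

The point is that the maintainer writes the pattern once against the stable
redirector URL instead of against whatever page layout sf.net uses this year.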
> Another reasoning explained in a different email.
> >> DEHS currently has 1564 watch files that use the redirector, and UEHS
> >> (Ubuntu's DEHS) also has some (no way for me to tell how many); any
> >> maintainer, DD, or automated system might be using it.
> >> Sticking with only the number of watch files in DEHS, and since the watch
> >> files are checked at least every four days, it would mean the redirector
> >> would have to download at least 8211 pages every week, 183 MB (120 KB for
> >> the three pages).
> > No need to check all files every four days.
> Four days is a sensible period of time; remember that DEHS data is expected
> to be up to date when used in other QA processes. Stale data is useless,
> would annoy, or would even cause side effects (e.g. false positives.)
> > Results from anyone using the redirector can be fed back to DEHS.
> That's not the way DEHS works.
With "can" I did not mean to say that DEHS is currently capable of this. I
meant that DEHS could be adapted to work like this.
> > Also, checking should slow down
> [... more stuff that won't be implemented, since pointing to the final url
> containing the files listing would be the "correct" approach ...]
Let's not confuse "no need to check all files every four days" with the reason
for existence of the redirector.
> > same result without actually checking every time.
> >> Only to provide a feature that most people don't need,
> > It's about addressing the issue of "the only remaining sf mirror that
> > keeps the redirector currently working".
> No, what you are proposing is to address an insignificant feature request to
> allow a maintainer to check files in a given download group.
No, I'm proposing to address the issue of depending on the only sf.net mirror
that keeps the redirector currently working. Let's not confuse that with the
part "Later on, a nice-to-have would be...".
> The way to
> address the real issue is, like I've already said several times, to contact
> sourceforge and reach an agreement.
As I wrote before, it is possible without such agreement, although probably
> >> not to mention that it would be extremely easy to break?
> > Why would it?
> Care to look at the history of watch files and sourceforge? Even at the
> reason behind the creation of the redirector?
Which parts would I have missed that would explain why the new redirector would
be more "extremely easy to break" than the existing redirector?
> >> And if you are to do that then why don't you simply take over
> >> DEHS? Oh, and write the watch file version four spec and implement it.
> > I prefer to join the team and to enjoy fixing DEHS together as peers
> > instead of taking over DEHS.
> If you really prefer that, why did we never hear back from you when we told
> you we preferred you to first send some patches to the ML? (Which was
> decided after you failed to explain your real intentions in spite of the
> many times we asked you, and failed to provide PHP code, which is DEHS'
> language, a patch, and a reason why we would need such a diverging change in
> DEHS instead of in uscan.)
Let's not confuse my previous attempt to join the DEHS team with the current
discussion.
> >> I once wrote a script to let watch files obtain the version information
> >> from freshmeat and the kde-apps (and similar) sites which only required
> >> one web page fetch, and nobody ever replied in spite of sending a couple
> >> of pings on the ML and on IRC, poking people, and ... nobody ever
> >> replied.
> > Is this still a problem today? I'm not sure why you mention this here.
> I mention this because, in spite of being the only way to allow watch files
> to use information from those websites (contrary to the sf case), and
> requiring only one request per query (contrary to the at-least-three
> requests per query approach you propose), it was never accepted.
I'm having difficulties with some long sentences.
You wrote "I once wrote a script" and "nobody ever replied". And now you wrote
"it was never accepted". Well, in my opinion, if you've found a way to improve
efficiency with freshmeat and KDE, then it's well worth considering. Do you
need help with it?