
Re: Getting the list of all bugs against Debian as a file

Martin Quinson <martin.quinson@tuxfamily.org> writes:

> Hello,
> I would like to do some grepping around to find the list of dormant
> l10n bugs, as we are discussing on -devel. But I do not want to harass the
> Debian servers with "useless" automated requests. So, I was wondering if it
> is possible to get the list of all open bugs at a given time.
> I do not need the content of the bugs, only their number, the package they
> refer to, and their title. Once I get such a list by harassing my machine
> with mixed greps, I will check it manually (thus implying a "normal" load
> on the servers). If the content of the bugs is also provided, that's fine.
> Is it possible somehow (given that I'm not a DD and thus cannot read
> files on the servers unless they are publicly available)? If this non-DD
> limitation is too restrictive and makes it impossible, I can ask DDs I know
> to send me the data, however.

You can do it this way, but it's one request per source package with bugs:

w3m -dump "http://bugs.debian.org/~ingo/mrvnbugs/cgi/pkgindex.cgi?indexon=src&rfc822=yes" \
 | grep "Source:" | cut -d" " -f 2- \
 | while read SOURCE; do \
     echo "; Bugs for $SOURCE"; \
     w3m -dump "http://bugs.debian.org/~ingo/mrvnbugs/cgi/pkgreport.cgi?src=$SOURCE&rfc822=yes" \
      | grep -v "^;"; \
   done | tee BIG_LONG_BUGLIST.txt

Don't include that in any package. It's a temporary URL (hopefully the
rfc822 option gets added to the BTS soon), and it's quite a few
requests. It's probably OK if you do this once in a while.
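Once you have the dump, reducing it to the number/package/title triple you
asked for is a small awk job. A sketch, with the caveat that the field names
(Package:, Bug:, Subject:) are my guesses at the rfc822 dump format, so
adjust them to whatever the CGI actually emits:

```shell
# Reduce the rfc822-style dump to "number package title", one per line.
# The field names (Package:, Bug:, Subject:) are guesses at the dump
# format -- adjust them to whatever the CGI actually emits.
# A couple of made-up records stand in for the real BIG_LONG_BUGLIST.txt:
cat > BIG_LONG_BUGLIST.txt <<'EOF'
Package: hello
Bug: 100
Subject: hello: segfault on startup

Package: dpkg
Bug: 200
Subject: dpkg: typo in manpage
EOF

awk '
  /^Package:/ { pkg = $2 }
  /^Bug:/     { num = $2 }
  /^Subject:/ { $1 = ""; sub(/^ /, ""); title = $0 }
  /^$/ && num != "" { print num, pkg, title; num = "" }
  END { if (num != "") print num, pkg, title }
' BIG_LONG_BUGLIST.txt
```

The blank line between records triggers the print; the END clause catches a
final record that lacks a trailing blank line.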

You can also store the list of sources with bugs (that's a daily
generated index, so it's cheap) and check for changes on each run.
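The change check can be a plain comm(1) over the stored list and the fresh
one (both sorted). The file names are my own; in real use sources.new would
come from the pkgindex.cgi dump above:

```shell
# Compare last run's source list with the current one and keep only the
# sources that gained bugs since then.  "sources.old"/"sources.new" are
# names of my choosing; in real use sources.new would come from the
# pkgindex.cgi dump above.  comm(1) needs both files sorted.
printf 'bar\nfoo\n'      > sources.old   # last run
printf 'bar\nbaz\nfoo\n' > sources.new   # this run

comm -13 sources.old sources.new         # prints: baz
mv sources.new sources.old               # remember for next run
```

That way you only hit pkgreport.cgi for sources whose entry actually changed.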

Another method would be to use the LDAP interface. wnpp-mail uses that
to gather its info; look there for an example.

I was thinking about building a daily index of new and changed bugs,
but I'm waiting for a response to my first patch (the one running at
that URL).

