
Re: Debbugs: The Next Generation



On Wed, 8 Aug 2001, Matt Zimmerman wrote:

> The schema is not implicit in the script, but from what I can tell, it is
> similar to what I came up with.  You've done some additional processing of
> email addresses, and you store package source/section/priority data in the
> database as well.

Look at schema.pm, or run psql -U debbugs debbugs on master.

What you mean to say here is that all of the data is normalized.
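
For anyone without access to master, here is a rough idea of the kind of
normalization I mean.  Table and column names are illustrative only; the
real definitions are in schema.pm:

    # Illustrative only -- the real table and column definitions live in
    # schema.pm.  The point is the normalization: addresses and packages
    # are stored once and referenced by id, instead of repeated per bug.
    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=debbugs', 'debbugs', '',
                           { RaiseError => 1 });

    $dbh->do(q{
        CREATE TABLE address (id SERIAL PRIMARY KEY,
                              email TEXT NOT NULL UNIQUE)
    });
    $dbh->do(q{
        CREATE TABLE package (id SERIAL PRIMARY KEY,
                              name TEXT NOT NULL UNIQUE,
                              source TEXT, section TEXT, priority TEXT)
    });
    $dbh->do(q{
        CREATE TABLE bug (id INTEGER PRIMARY KEY,            -- bug number
                          package_id   INTEGER REFERENCES package(id),
                          submitter_id INTEGER REFERENCES address(id),
                          severity TEXT,
                          done BOOLEAN NOT NULL DEFAULT FALSE)
    });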

> This is intended to work in concert with the flat file data, not replace it,
> correct?
>
> (It turns out you answered that question below)
>
> > On Wed, 8 Aug 2001, Matt Zimmerman wrote:
> > > I chose C++ because it's relatively easy to use C and C++ code from interpreted
> > > languages, but not to share code written in, say, Perl, with other languages.
> > > With the current debbugs, there seems to be a choice between using the Perl
> > > stuff or parsing the files yourself.  While munging the text files by hand
> > > isn't that unappealing, futzing with the database by hand is, so I would prefer
> > > to share that code.
> >
> > Those who work on debbugs want an easily maintainable system.  C/C++ do not
> > come to mind as being in that category.
>
> Are you claiming that the current debbugs code is easily maintainable?

No.  I am saying that it is easier to modify a version written in a scripting
language than one written in a compiled language.

> I have not tried to implement a powerful query interface yet, only basic
> package and bug-ID matching.  It would be no small task to create a query
> language powerful enough for a reasonable subset of conceivable BTS queries,
> and then translate it into SQL.  I've considered whether advanced reporting
> applications shouldn't just talk to the database directly, exposing some
> internal details in exchange for fast, flexible queries.

Please see http://doogie.org/~adam/bugs-query.ws.
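
For comparison, the "talk to the database directly" option ends up looking
roughly like this in DBI.  Table and column names here are made up, not the
real schema:

    # Direct-to-database query: all open bugs of a given severity in one
    # package.  Table and column names are the made-up ones, not the
    # real schema.
    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=debbugs', 'debbugs', '',
                           { RaiseError => 1 });

    my $sth = $dbh->prepare(q{
        SELECT b.id, a.email, b.severity
          FROM bug b
          JOIN package p ON p.id = b.package_id
          JOIN address a ON a.id = b.submitter_id
         WHERE p.name = ? AND b.severity = ? AND NOT b.done
         ORDER BY b.id
    });
    $sth->execute('dpkg', 'serious');

    while (my $row = $sth->fetchrow_hashref) {
        printf "#%d  %s\n", $row->{id}, $row->{email};
    }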

> Did you plan to periodically do a full import, or simultaneously update both
> stores?  One of my goals was to speed up turnaround time, from receipt of the
> initial report through updated output in the web interface, and that was the
> rationale for making the database authoritative.  It seems easier to
> periodically translate database->flat files than to go the other way around.

It takes 6.5-8 minutes to do a full import on master (it recently got an
upgrade).  The Perl process uses some 50 MB of RAM.

That is 77182 bugs (the count includes archived bugs).  Only the .status data
is in Postgres; there is no reason for the .log data to be there.
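
A full import is essentially a walk over the spool, roughly like this.  The
field positions and tables are placeholders (the same illustrative tables as
above); the real importer follows the debbugs read/write routines:

    # Rough outline of a periodic full import: walk the spool, read each
    # NNNN.status file, and refresh the corresponding row.  The spool
    # path, the table layout, and the .status field positions are all
    # placeholders, not the real definitions.
    use strict;
    use DBI;
    use File::Find;

    my $spool = '/org/bugs.debian.org/spool';
    my $dbh   = DBI->connect('dbi:Pg:dbname=debbugs', 'debbugs', '',
                             { RaiseError => 1, AutoCommit => 0 });

    # No single insert-or-update statement available, so delete + insert.
    my $delete = $dbh->prepare('DELETE FROM bug WHERE id = ?');
    my $insert = $dbh->prepare(
        'INSERT INTO bug (id, severity, done) VALUES (?, ?, ?)');

    find(sub {
        return unless /^(\d+)\.status$/;
        my $bugnum = $1;
        open my $fh, '<', $_ or return;
        chomp(my @fields = <$fh>);
        close $fh;
        # Which line holds which field is defined by debbugs itself;
        # these indices are placeholders, not the real layout.
        my $severity = $fields[9] || 'normal';
        my $done     = $fields[6] ? 1 : 0;
        $delete->execute($bugnum);
        $insert->execute($bugnum, $severity, $done);
    }, $spool);

    $dbh->commit;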

Also, look at /org/bugs.debian.org/spool/debbugs.trace.  This file is updated
whenever a bug changes state (which includes new bugs).  However, bug archiving
is currently not reflected in that file.

The end goal in implementing the above tracing feature was to allow for
dynamic updates to whatever index is implemented.  No one has yet written
anything to take advantage of it, though (the feature is probably not even a
month old yet).
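
Something along these lines could sit on the trace file and refresh only the
bugs that changed.  The record format is not spelled out here, so the parsing
below only assumes each record contains the bug number somewhere:

    # Sketch of "something to take advantage of this": follow
    # debbugs.trace and re-import just the bugs that changed, instead of
    # doing a full import every run.  Note that archiving is not in the
    # trace yet, so that still needs a separate pass.
    use strict;

    my $trace = '/org/bugs.debian.org/spool/debbugs.trace';

    open my $fh, '<', $trace or die "open $trace: $!";
    seek $fh, 0, 2;                          # start at end of file

    while (1) {
        while (my $line = <$fh>) {
            my ($bugnum) = $line =~ /(\d{3,7})/;   # placeholder parsing
            next unless defined $bugnum;
            reimport_bug($bugnum);           # e.g. the .status import
                                             # above, for this one bug
        }
        sleep 5;                             # poll for new records
        seek $fh, 0, 1;                      # clear the EOF condition
    }

    sub reimport_bug { my ($bug) = @_; print "would refresh bug $bug\n" }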


