
Re: UDD migration to ullmann.debian.org



On Wed, Jun 27, 2012 at 07:33:29AM +0100, Stephen Gran wrote:
> > I wonder whether we could now find a way to export the UDD bugs
> > tables as I have suggested and which was confirmed by Lucas:
> > 
> >    http://lists.debian.org/debian-qa/2012/05/msg00023.html
> 
> I see the advantage of having a udd replica set somewhere,

What I'm running is not really a replica of UDD.  I use
blends.debian.org to test new code which is not yet finished and thus
cannot go to the official UDD.  My point in cloning only the bugs
tables was that, for testing purposes, the effort of keeping a >65GB
clone of the BTS as well does not seem justified.

> if
> latency and read performance on ullmann is problematic.  As we haven't
> tried ullmann yet, though, it seems a little premature.  Also, it seems
> you are still loading up alioth with random jobs that have little to do
> with alioth's primary goal of being a software forge.  Are you having
> difficulty finding hosting space for your pet projects?

What you might call "random jobs" is probably the daily script I'm
running to fetch machine-readable data.  The code is in Blends SVN and
can be inspected here:

   http://anonscm.debian.org/viewvc/blends/blends/trunk/machine_readable/fetch-machine-readable?view=markup

During a testing period I might have it run up to 10 times per day at
random times, but the usual process runs once a day and takes less than
20 minutes.  My actual idea was that fetching from the local archives,
rather than generating network traffic, might be the most reasonable
way to do the job.  I have no idea whether you consider inspecting the
hosted content to fit alioth's primary goal; if you consider this a
misuse, I would be happy to hear your suggestions on how to do it
better.  The data produced by this script is used to fill a UDD table
(blends-prospective).
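
For reference, the scheduling side of this is nothing more exotic than
a single daily crontab entry; a sketch (the exact time and install path
are assumptions, the real script is the one in Blends SVN linked above):

```
# m h dom mon dow  command -- fetch machine-readable data once a day
17 4 * * *         /srv/blends/machine_readable/fetch-machine-readable
```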

> > I do see two options:
> > 
> >    1. Either generate the partial database dump in a cron
> >       job and move it to some http-accessible space, or
> >    2. Get a passwordless SSH key whose sole permission is
> >       to trigger the dump and rsync the result
> >       (= call a dedicated script)
> 
> postgres has native replication support.  I don't see a need to reinvent
> the wheel here, do you?

No, I don't.  But I would like to replicate only some specific tables,
and I'd be really happy to learn how to do this in the most reasonable
way.
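
For what it's worth, option 1 above could be as small as a cron-driven
wrapper around pg_dump's -t switch, which restricts the dump to the
named tables.  A sketch -- the table names, database name, and target
path here are assumptions for illustration, not the real UDD setup:

```shell
#!/bin/sh
# Assemble a pg_dump call that exports only the named tables.
# -Fc produces a compressed custom-format archive that pg_restore
# understands on the receiving host.
build_dump_cmd() {
    db="$1"; shift
    cmd="pg_dump -Fc"
    for t in "$@"; do
        cmd="$cmd -t $t"     # -t limits the dump to this table
    done
    echo "$cmd -f /srv/udd-export/$db-partial.dump $db"
}

# The cron job would run the assembled command and move the dump file
# to some http-accessible space for the other host to fetch.
build_dump_cmd udd bugs bugs_packages
```

The receiving host could then fetch the file over HTTP and load it
with pg_restore, which would cover the "partial dump" case without any
replication machinery.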

> > What would be your preferred solution to make a dump of a few UDD tables
> > accessible to some other host?
> 
> What other host?

blends.debian.net .
 
Kind regards

     Andreas.

-- 
http://fam-tille.de

