
Re: Processed: destruction of round-robin functionality is fucking up our mirrors and making Debian suck for many people, hence fixing this is a release-critical "wish"



On Tue, Dec 18, 2007 at 03:35:51PM +0100, Josip Rodin wrote:
> On Mon, Dec 17, 2007 at 07:51:18PM +0100, Martin Schulze wrote:
> > > I've asked DSA for server-status already, and mentioned the logs too,
> > > we'll see (they haven't replied yet).
> > Server status is configured on localhost.
> OK, so I started measuring that too, and the rates for the last half a day
> or so are:
> * villa: 20.4 rps, 6.18 Mbps
> * lobos: 18.9 rps, 6.23 Mbps
> * steffani: 40.0 rps, 15.92 Mbps
> The ratios for both parameters are matching the general bandwidth ratios,
> so the measurements should be correct.
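
(For anyone who wants to reproduce those figures: they can be pulled
straight out of mod_status by sampling the machine-readable
server-status output twice and taking deltas. A rough sketch, assuming
ExtendedStatus is on so the ?auto output includes the Total Accesses
and Total kBytes counters; the URL and interval are just placeholders:

    # Sketch: estimate requests/sec and Mbit/sec from Apache mod_status.
    # Assumes mod_status on localhost with ExtendedStatus On, so the
    # ?auto output contains "Total Accesses:" and "Total kBytes:" lines.
    import time
    import urllib.request

    URL = "http://localhost/server-status?auto"   # placeholder
    INTERVAL = 300                                # seconds between samples

    def sample():
        """Return (total_accesses, total_kbytes) from one ?auto snapshot."""
        status = {}
        with urllib.request.urlopen(URL) as resp:
            for line in resp.read().decode().splitlines():
                if ":" in line:
                    key, _, value = line.partition(":")
                    status[key.strip()] = value.strip()
        return int(status["Total Accesses"]), float(status["Total kBytes"])

    acc1, kb1 = sample()
    time.sleep(INTERVAL)
    acc2, kb2 = sample()

    rps = (acc2 - acc1) / INTERVAL
    mbps = (kb2 - kb1) * 1024 * 8 / 1e6 / INTERVAL   # kB -> Mbit, per second
    print("%.1f rps, %.2f Mbps" % (rps, mbps))

Run it over a longer interval, or repeatedly, to smooth out bursts.)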

As of, umm, 2007-12-18 08:30 UTC (about 20 hours ago), testing users
should be starting to hit each mirror equally. So in future numbers,
we should see a noticeable change, with all the testing users currently
assigned to classes B and C appearing in class A instead.
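
Incidentally, if anyone wants to see which order their own box will
actually try the mirrors in, asking the resolver is more telling than
dig, since glibc applies the RFC 3484 sorting (rule 9 included) before
the client ever sees the round-robin order. A rough sketch; the
hostname is just what's being tested:

    # Show the address order a client will actually use.  getaddrinfo()
    # returns addresses after the resolver's RFC 3484 sorting, so if the
    # first entry is always the same mirror no matter what order the DNS
    # round robin hands back, that host isn't spreading its load.
    import socket

    for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
            "security.debian.org", 80, socket.AF_INET, socket.SOCK_STREAM):
        print(sockaddr[0])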

The numbers so far have gone:

    villa:     4.29 (19%) ->  5.33 (21%) ->  6.18 (22%)
    lobos:     3.91 (17%) ->  4.92 (20%) ->  6.23 (22%)
    steffani: 14.86 (64%) -> 14.58 (59%) -> 15.92 (56%)

The calculations give:

    A   = 18.84 MB/s (67%)
    B   =  9.64 MB/s (34%)
    C-F = -0.15 MB/s (-1%)
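
(To spell out the arithmetic, since it doesn't follow from the three
rates alone: reading the classes as A = no preference, so round-robins
over all three hosts, B = sorts steffani first, and C-F = sorts the
villa/lobos pair first, each mirror serves A/3 as a baseline and B and
C-F are whatever is left on top of that. The class A total itself is
taken as given here, since it comes out of the log analysis rather
than these figures. A rough sketch that reproduces the numbers above:

    # Class split arithmetic under the RR+rule9 model:
    #   class A round-robins, so each mirror serves A/3;
    #   class B prefers steffani; classes C-F prefer the villa/lobos pair.
    # The class A total is an input, estimated separately from the logs.
    def split(villa, lobos, steffani, class_a):
        per_mirror_a = class_a / 3.0
        class_b = steffani - per_mirror_a
        class_c_to_f = (villa - per_mirror_a) + (lobos - per_mirror_a)
        return class_b, class_c_to_f

    # Bandwidth figures from above, with class A estimated at 18.84:
    b, cf = split(6.18, 6.23, 15.92, 18.84)
    print("B = %.2f, C-F = %.2f" % (b, cf))   # B = 9.64, C-F = -0.15
    # Applying the same function to the request rates (20.4, 18.9, 40.0,
    # with A = 52.2) gives the rps split further down.
)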

That's obviously a pretty odd outcome for C-F, and is due to lobos
getting more traffic than villa, which shouldn't be happening according
to RR+rule9. I guess this means that the random factors amongst about 30%
of hosts (1/3rd of the hosts in classes A/B) are playing a bigger role
than the entirety of class C (about 5% of hosts at last estimate), ie, we
have a random noise factor of about 17% (about 6.51 MB/s in total)...
That's not unreasonable given the usage patterns for security.d.o,
though I was hoping they'd cancel out better :(

Working with requests rather than bandwidth (where the noise comes from
the number of packages needing an update, rather than from the total
size of the packages and Packages files needing an update) gives:

    A   = 52.2 rps (66%)
    B   = 22.6 rps (28%) 
    C-F =  4.5 rps ( 6%)

which is closer to what I'd expect given previous estimates, though still
notably different to the earlier 55%/40%/5% split based on bandwidth. Note
that A included all unstable users who'd upgraded in the past week or so,
as well as 0.0.0.0-127.255.255.255 hosts. In future it will include all
testing users who've upgraded since the 18th UTC, up until the DNS change.

Cheers,
aj
