
Re: master under v. high load, bug WWW updates temporarily every 4h



Jason Gunthorpe <jgg@gpu.srv.ualberta.ca> writes:

> Just for discussion, a ps -aux --cumulative right now, here are the
> interesting things:
> andrew   17885  1.0  2.6 17276  1668  ?  S     06:08   7:04 rsync --server --se
> iwj       8385  8.3 23.9 16160 15128  ?  R     15:39   6:30 perl /debian/home/i
> maor     28649 18.3  1.3  1504   824  ?  S     14:51  23:00 sh /debian/home/mao
> qmaild   20796  0.3  0.1   796    80  ?  S    Feb  4  37:16 /usr/local/bin/tcps
> qmaill    1294  0.2  0.1   720   124  ?  S    Jan 20  77:40 /usr/local/bin/logg
> qmailr    1296  8.3  0.1   856   104  ?  S    Jan 20 2784:11 qmail-rspawn 
> root        13  0.3  0.0   796    44  ?  S    Jan 20  105:58 update 
> root      1208  0.1  0.2   832   168  ?  S    Jan 20   65:12 /sbin/syslogd 
> root      1228 16.2  0.2   824   164  ?  S    Jan 20 5422:14 /usr/sbin/inetd 
> root      1240  1.0  0.2  1072   172  ?  S    Jan 20  337:10 /usr/sbin/apache 
> root      1243  0.6  0.3   812   236  ?  S    Jan 20  209:43 /usr/sbin/tcplogd
> root      1250  0.0  0.3   812   224  ?  S    Jan 20   28:50 /usr/sbin/icmplogd
> root      1258  2.9  0.7  1372   456  ?  S    Jan 20  999:25 /usr/sbin/sshd 
> root      1286 20.7  0.2   840   128  ?  S    Jan 20 6938:46 /usr/sbin/cron 
> root      1295  1.3  0.1   868   116  ?  S    Jan 20  456:26 qmail-lspawn 
 
> This is pretty much everything with a sizeable ctime. First off, it is
> interesting to note that there are three sources of load: inetd, qmail and
> cron jobs. The 5000m on inetd is actually pretty sickening; maybe we
> should be using proftpd (master gets a lot of ftp hits) and some sort of
> faster ident server..

Why are we using separate cron jobs for the long-running regular tasks?
It's very difficult to schedule them so that they don't conflict with
each other. Why aren't we using one shell script that runs these jobs
one after another, so that master only has to work on one big job at a
time? That should reduce the normal load on the machine *and* let each
job finish faster, because the machine isn't switching between
processes that compete for the same resources, like the hard disk.
Running them simultaneously defeats all caching and other tuning
mechanisms.
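
A minimal sketch of what I mean (the job names and paths below are made
up, just to show the structure):

  #!/bin/sh
  # Run the long regular jobs one after another instead of letting
  # cron start them all in parallel.  The scripts named here are only
  # placeholders for whatever really runs on master.
  set -e
  LOG=/var/log/nightly-run.log
  {
      echo "nightly run started: `date`"
      /usr/local/sbin/update-archive        # placeholder
      /usr/local/sbin/update-bug-www-pages  # placeholder
      /usr/local/sbin/push-to-mirrors       # placeholder
      echo "nightly run finished: `date`"
  } >> $LOG 2>&1

Then the individual crontab entries collapse into a single one, e.g.
"30 2 * * * /usr/local/sbin/nightly-run", and the jobs never fight over
the disk.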
 


I like the idea of reducing master's load by providing a more tree-like
mirror architecture.
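
For example, a second-level mirror could pull from a nearby first-level
mirror instead of from master (the hostname below is made up):

  # hypothetical second-level mirror: fetch the archive from a
  # first-level mirror rather than hammering master directly
  rsync -az --delete ftp.us.example.org:/debian/ /debian/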

"James A.Treacy" <treacy@debian.org> writes:

> Tim Sailer would be willing to let his machine be used for the same
> purpose, but he's too short on disk space.

Can't we use some of our donations to give this machine more disk
space? Up-to-date mirrors are IMHO important for Debian.

Bye
  Christian

-- 
Christian Leutloff, Aachen, Germany         leutloff@sundancer.oche.de  
      http://www.oche.de/~leutloff/         leutloff@debian.org      

            Debian GNU/Linux - http://www.de.debian.org/
