
[Debconf-video] server setup and network testing

Yesterday and today Richard and I worked on a number of videoteam
servers and ran some network tests. Below are some notes from the whiteboard.

-lisa's bios waits for a keystroke on boot. make it not do that.
-make sure grub is installed to both disks in root raid1 pairs
-recreate lisa's larger fs w/-m0 and smaller inode count
  the 861G of dc8 video on barney only use 810 inodes.
-find 2nd drive for ned
-blow out dust from intake filters and fan outflows.
-we have 15 available static IPs. P or S can take more from the dhcp
dynamic range if needed for us.
MAC addresses to P for static IPs:
ned: 00:30:48:d6:1f:be
lisa: 00:23:ae:87:a3:56 (uncapped: for going out of CU)
barney: 00:1e:8c:25:e6:fe (uncapped: for rsync to offsite video mirror)
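The grub and fs-recreation items above might look something like this. This is a sketch only: the device names, md numbers, and inode count are assumptions for illustration, not what's actually configured on the boxes.

```shell
# install grub to the MBR of BOTH members of the root raid1 pair,
# so the machine still boots if either disk dies
# (sda/sdb are assumed device names):
grub-install /dev/sda
grub-install /dev/sdb

# recreate lisa's larger fs with no reserved blocks (-m 0) and far
# fewer inodes (-N): ~810 files in 861G means the default inode
# count is massive overkill. /dev/md3 and the -N value are assumptions.
umount /srv
mkfs.ext3 -m 0 -N 100000 -L srv /dev/md3
mount /dev/md3 /srv
```

Note that the reserved percentage alone could be dropped in place with `tune2fs -m 0`, but changing the inode count requires remaking the fs, hence the umount/mkfs/mount cycle.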

-Static IPs to -admin for munin and icecast and what else?

-we need a homer (or somewhere else) to run dc10.debconf.org DNS, amongst
other things (though far fewer other things than most years)

-PyCon machines should be arriving within a day or two, at that point
edrz can install, test and get MACs to appropriate authorities to
request static IPs.

Notes: These three machines reside now in the coffee room until end of

barney: quad-core xeon, 8GB RAM (w/-bigmem kernel) 
  4x1TB raid5 /dev/md1 on /srv yielding 2.8T
    3xDebConf owned, 1xedrz
  for testing purposes there is a copy of the dc8 video data there 
  (can be deleted whenever it's no longer useful and/or we need the space)
  swap, / and ~675G are on a pair of edrz's 750G drives 
  9.2G / raid1 /dev/md2
  675G raid1 /dev/md3 temporarily on /mnt/ could be /srv/archival/ or ?

lisa: dual core xeon, 4GB RAM 
  2x750GB drives 
    (smaller drives would suffice, but we didn't have any on hand)
  9.2G raid1 /
  670G raid1 /srv 
    (remake fs: lower reserved %, lower inode count could give ~5GB more)
    can be used for stream dumps perhaps and/or "replay" source material

ned: quad-core xeon, 8GB RAM
has 1x250GB PATA drive, but set up as raid1 w/missing drive so we can
add another should we find one.
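Setting up raid1 with a deliberately absent second member uses mdadm's "missing" keyword. A sketch, with assumed device names (hda/hdb for the PATA drives):

```shell
# create a degraded raid1 with one real member and one "missing" slot:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 missing

# later, once a second drive turns up, hot-add it:
mdadm /dev/md0 --add /dev/hdb1

# the array then resyncs onto the new member; watch progress with:
cat /proc/mdstat
```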

-if 2 quad core machines aren't enough for the transcoding, mrbiege has a
new quad-athlon here and edrz has one still in VA.

-Once we have our backported packages re-built, signed and published we
can start running transcode performance tests on the dc8 DV.

dvswitch network testing from Davis:
using dvsource-file w/loopdc10f.dv to feed dvswitch on edrz's laptop
from Davis seems to work fine, running 2x dvsink-files, one each on ned
and lisa (barney was busy copying files).
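Roughly, the test looked like the following. The port number is an assumption, and "umbra" is used as the mixer hostname on the strength of the mtr output below; the exact flags should be checked against the dvswitch tools' --help output.

```shell
# on edrz's laptop (umbra): start the dvswitch mixer listening on the LAN
dvswitch -h 0.0.0.0 -p 2000

# in Davis: feed the mixer the looping DV test file
dvsource-file -h umbra -p 2000 loopdc10f.dv

# on ned and on lisa: dump the mixed output stream to disk
dvsink-files -h umbra -p 2000 /srv/test/davis-
```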

(if mail formatting messes this up, check it on the wb)

                                         My traceroute  [v0.73]                 
umbra (
Thu Jul 15 15:50:28 2010
 Host                                                         Loss% Snt   Last   Avg  Best  Wrst StDev
 1. mudd-edge-1-vlan86-1.net.columbia.edu                      0.3% 7117    0.4   8.2   0.3 245.4  29.3
 2. dhcp-13-248.cs.columbia.edu                                0.3% 7116    0.4   0.7   0.3 269.0   9.8

umbra == edrz's laptop
248 == barney

dvswitch network testing from Interschool Lab:

same test as in Davis, but IL has a port on the CS net-13 (same as
coffee room servers), so one less hop than above. my battery died, so I
lost the mtr stats. :-/ They were good, though. :)

i.e. we're on the same segment. over a similar time frame there was 0
packet loss. last, avg, best, wrst were all much better than Davis and
more stable. Though Davis was still adequate to run 2xdvsink-files for
several hours. (we'll only need one dvsink-command)

Also tested bulk transfers from Interschool. Basically, it doesn't look
like that network will be a bottleneck. We consistently averaged
100+ Mbit/s with peaks to 500 Mbit/s. Probably with more hosts to send
to/receive from we could have used up more bandwidth.
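For repeating the throughput measurement without shuffling real files around, something like iperf would give cleaner numbers (an assumption; the test above was plain bulk copies):

```shell
# on barney (or whichever coffee-room server): run the iperf server
iperf -s

# on a laptop in the Interschool Lab: 60-second test, report every 10s
iperf -c barney -t 60 -i 10
```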

the lighting there seemed a little bit brighter than I had remembered,
but after turning off the front row that washes out the screen, it was
clear we still need to obtain supplemental lighting.
