
Re: Extremely large level 1 backups with dump



>> In an earlier message, I said:

K> This box has ~630,000 files using 640 Gbytes, but not many files change
K> hourly.

>> On Mon, 6 Dec 2010 21:33:01 -0700, Bob Proulx <bob@proulx.com> said:

B> Note that you must have sufficient ram to hold the inodes in buffer cache.
B> Otherwise I would guess that it would be hugely slower due to the need to
B> read the disk while reading directories.  But if there is sufficient ram
B> for filesystem buffer cache then it will be operating at memory speeds.
B> For anyone trying to recreate this goodness but wondering why they aren't
B> seeing it then check that the buffer cache is sufficiently large.
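
   (One way to see how much of that cache is actually inodes and dentries
   is the slab accounting; a quick check, assuming ext3 filesystems:

      root# grep -E 'ext3_inode_cache|dentry' /proc/slabinfo

   or run slabtop for a sorted view.)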

   Here are some machine specifics for perspective.  It's an IBM x3400
   with two 2-GHz Xeon CPUs and 4 GB of memory, running Red Hat.  It has
   eight WD4000KS 400-GB drives: 16 MB cache, SATA-300 interface
   (300 MB/s), 7200 RPM.

   me% free
                total       used       free     shared    buffers     cached
   Mem:       1943948    1576708     367240          0     295304     947900
   -/+ buffers/cache:     333504    1610444
   Swap:      2096472        336    2096136

   me% cat /proc/meminfo
   MemTotal:      1943948 kB
   MemFree:        689696 kB
   Buffers:        394412 kB
   Cached:         461864 kB
   SwapCached:          4 kB
   Active:         500328 kB
   Inactive:       419836 kB
   HighTotal:     1179008 kB
   HighFree:       681396 kB
   LowTotal:       764940 kB
   LowFree:          8300 kB
   SwapTotal:     2096472 kB
   SwapFree:      2096136 kB
   Dirty:             512 kB
   Writeback:           0 kB
   AnonPages:       63836 kB
   Mapped:          24212 kB
   Slab:           321088 kB
   PageTables:       4436 kB
   NFS_Unstable:        0 kB
   Bounce:              0 kB
   CommitLimit:   3068444 kB
   Committed_AS:   196428 kB
   VmallocTotal:   114680 kB
   VmallocUsed:      3304 kB
   VmallocChunk:   111332 kB
   HugePages_Total:     0
   HugePages_Free:      0
   HugePages_Rsvd:      0
   Hugepagesize:     4096 kB

   The default read-ahead on the drives was 256 sectors (128 KB).  After
   some testing, I found my sweet spot was 16384 sectors (8 MB):

      root# blockdev --getra /dev/sda
      256
      root# blockdev --setra 16384 /dev/sda
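
   To recreate that on all eight drives at boot (from rc.local, for
   instance), a loop like this would do; the sd[a-h] device names are
   just an example, adjust them to the actual drives:

      # Set read-ahead to 16384 sectors (8 MB) on each drive.
      for dev in /dev/sd[a-h]; do
          blockdev --setra 16384 $dev
      done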

   The default I/O scheduler was CFQ (Completely Fair Queuing):

      root# cat /sys/block/sda/queue/scheduler
      noop anticipatory deadline [cfq]

   For a single user, CFQ was only about 1% faster than the other
   elevators, but I found a web page claiming that in tests with 4
   concurrent users, deadline performed 20% better.  So at boot time:

      echo "deadline" > /sys/block/sda/queue/scheduler

   All filesystems are ext3, mounted like so:

      rw,nodev,noatime,nodiratime,data=journal
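
   The corresponding /etc/fstab entry looks something like this (the
   device and mount point here are made up):

      /dev/sdb1 /data ext3 rw,nodev,noatime,nodiratime,data=journal 1 2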

   They have the largest journal size possible, 400 MB.  In /etc/sysctl.conf:

      # Better write performance, avoids unnecessary paging.
      vm.swappiness = 10
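
   ("sysctl -p" rereads /etc/sysctl.conf if you want the swappiness change
   without a reboot.)  The journal size has to be chosen when the journal
   is created; a sketch, with a made-up device name:

      # At filesystem creation time:
      root# mkfs.ext3 -J size=400 /dev/sdb1

      # Or on an existing filesystem (unmount it first):
      root# tune2fs -O ^has_journal /dev/sdb1
      root# tune2fs -j -J size=400 /dev/sdb1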

   I ran a timing test while 49 Samba sessions were active:

      date
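      # Stamp /tmp/TIME 24 hours in the past; "find -newer" then reports
      # everything modified since.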
      touch -d yesterday /tmp/TIME
      # "list" holds 8 340-Gb filesystems, from 42-67% full.
      find $list -newer /tmp/TIME -print | wc -l
      date

   Results (541 changed files found, 5 minutes 35 seconds of wall time):

      Tue Dec  7 13:34:05 EST 2010
      541
      Tue Dec  7 13:39:40 EST 2010

-- 
Karl Vogel                      I don't speak for the USAF or my company

Adam was a Canadian.  Nobody but a Canadian would stand
beside a naked woman and worry about an apple.          --Gord Favelle

